[Binary tar archive — compressed payload not recoverable as text. Archive members:]

var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
Y8"[m)F[sfK䆱B&2b .Y}sVYX$ 2 Фsd *clYVMVT5ʹYy;u., 0f~ds/?S b`VDRc_c7y4+s:GdpI˕Ө,-}2'&$ٸ̂0!tBݱ# \դ9iH g t W"KGY+D$XbYk fsIuhM*nOt 1BUċ2.s]31E^Xǀi \,q{RV 28Ҝ C%]H>!pH8v$-XJJd{I#I xA'Ah֎QuZ)@ڭ5iQ:$%'Pu^%:0 %Wi:ݍx-4V@5Xmr@43x>y ŮuyviŴGHAzK*4zƩ+`;U^Tɐ$ 8sZǙChOW-K6LY:+P*F;xy xi:xy;4g/aĴ 't>bqP u&02Iiim^\JZe =rRPb&s#alx[MgGo>'`i^`1 ~CWW|>t UG[k0=ߡCF1C5e,O*}6J5L[xGӺOڵ-?y[6))k '\4_5uEլi;g:]y&MoͺCjYQj~Zwv>.1Hz^i?|>ED_:4o\&|>]>29)JMbr9Zݭd0dWdVy:\ڗZ+-UDǽY3Q@(QV,Z Z€ʐcrYFcK\Fj@$2qLef[[ΎػPݎG_@m@~w;>e׋}1gfڡub׳6+8V}D<^xsDFD%)xFeRXn]A O L*YJ:Yε$a*8t6} 0B-U̵LMy ~g:.{n|j髓IO3QY$I[}] 4nPowWŗ P:^pFH]Ap%܂맒2wJ8s{Ap7&F^xz[9=O[r{uE_yPӵB_nE66WaeطV:k=9]ٽ_f oU#&;@ʧ|3}=HDjĶ3V|\e<$Qbr"ӑ6B*\{1 C{Ƿ}uu%o5Zhr{c̒ JV3 $xFIQs- \&*lQ׎Ե5+ʚ ;:L'a~zg *7>ǯs{<@@3,EB6AdB m`XC,lln7DaYiGŁ`*] SL%Kv4ݵ aǬ" lq {bh7Dc9c@r2> $472ag |6hAiQZGi^5(Z y*WYOc/5j,b컫!mq?L9gE~Ҡ @5ȫH"*xQdFAS p3WQʚfύY/qQama!X{_[w 4K+:=lf>cqԮ\T6s|M~n54}6Ի)sB^%d{r`z>鉈Bif/+ZƭWiN@<"1 N E9Py@*~mdQ9A1VC 0[y-A,Ud[M5Lx^/juIg$NA2Z#ӄIz!&霹,#e\+i-cY' w.Ő2zQ'YWkͷmpwPDPH:>\ɟ,S)jd ʢ IK]yo5> Fr*ƣX@хA5]]]٣/=O {eA|{(onOi#͋3&aܿ.mu*E!gښۺ_ae 3Iڭqe\Mb" I9VoN鐔xx o]ߏRB3nF}^h)*ZVIޮ}fYUy3f~˟Ԭ\]'$Lp,Q4pl^B/k]4,YxbY mЫ#+VQ+1Tt\^ޘWNJ7 -mu^h-mqY'V&lUVxYwjE$uIY53Yxg&IK|Ϟ{:k骳@{i:̀kn"nV )2O]Ly{-o.V.TZجmCSgPq(5,'Sޖ'-F<>#X2i?(8h #6R⩑F1So-vgSл|yߺCc0! _Π? 8wdEi2 L2¥j~-mY/#.]_W??|? OuN/ݛnxYX߼IqNځ3n,ؔWKs]":3J1: e4>2yy8{};)V5W?{zկKpӻ.^;0X@Xew8ܟ jdcRbW p謫l2n P&w>zA$32!Ot$Ȉ6F% X~rϑh_ϑ9O9-Y`4y㹱I j- Ig5"F $bs`y F=I6 #hSBP- K}$I=#F(ɂ- F2LgeWp`9`#*\YǞ| c'P7Tͪ:s銉ıLo]HHhh (:I)H+!lh"U]!y. 
ZD  \Lŝ^80`ЌQȤEkB-M Z֕2!)r:Y 8XGhFSRyCAxB$X6h Ml|at mBXz`:c$(6-Rss0 ?"., Tr֫zbT/:vJ:F Qp6ǀS5Q21 +$g Ϯk?.႑³rDmt5\ *#'"[326<)m"U$Uwyp(b4Ϯb mXۖbafە\ c]l :ى755Rϻ<Æ fɫ V1!q*o@] ]س΅K,j`:75[͹ piL'rъP-(0dI@̵z¥sl[$)N$J")!gSG1'?L+Mxj;Z5q6;VWN=?ƬKk;> >d\k@i#n5y|,H2pEW[gUqm+x̮;z^.4֝/n;~ZX;-e9!n#Ȑ)!7 9iUMPOB+Q =T;[̀@Uj7M|RcgEJ8KU8è\,ɷGdز:y*>=$j K8Hf"&T %HJ!Nteo/}6O)O eFI'e 'G"K8Q2 FSFgakW3"Z6L Tw D2#c#YP@T}#⏽,.'{tef32"HRJ\5 {J0\(+|u ;z2}R_xD|5ĭ#ΨAc ;>B9VuK.raqg.7gVg-F8'$:*''5Bؙ/jwo"Ϣe)Ug!tzSw{BLF GI}J=fF|љߪ69a : q{n!qp1RcgOҝqfkw?O~|7|v="Tτ^^^ǖ.8p~ o7{#ijI5$-45Ú1^^p(杼߻pf;Z9ELcouɦVr0Yĩ%N>W}7+uGN|Օ2&TpNk\Y7~+޾/?]|Ç ۿ G`} o,?w2/OhZ47mEӂɀo.o >Tŵ6h~ /ww8uVC9>8z_AQojk9ITXJR2߄z 1!?ӸyЋ6ZiM]46sKethRě1Eb~2Vz&j65ݟ6!8er`3/sH]m`OVl/Zn6?mnѺ}5Y|9BUEݺIĄ-~[%K 5ByCV+@fIu\H@-ZΌ`9xIĩq!BAۜ(` <1ؖmv8 (緃]wvs`>6B0އ FW2 8V~pA Zkɨ,H2h2)K8*_gL~-U NrƁUR 0Jlrd<3 Ӛ1T YY/@@fϾX*:qblJ6Ųd2`,y%yg[jYݒ,S{z0~uSd5U9]n 'Kr&;fzKbm6I|b3 p%P'1r Z'\2u8DdG4Ϳh>CQ 5PDpJK+%vWz)wq j^ jq]/gP;r1(Ϗy]KZ?V)¢}!q*Dꪐ} 6I% /1ÿӮk%nzZ([f7wO|Ο`02?y0}1β]Ry NBPmc"^6`}@7˾! KB*`dJAQ2#^\sb]H"<*kKhأN:{C:f4%D2 $( Xʊ*g(nt<б%N:M4퐚5JQӫX~eܻ`[`uo^J۫!vqO47%ʖME:>6.lCX.O_Xw]zN~v=Ϳ/Ö ,SI _Cy7F0aP(m10Ema\FWH17N:Bц阌S|i+-@ Lu>F"E[U*TN8Kdߖ9;7ZEWE5,%d %&154rg:x`OY/Y# v9i3c"b[_g/rD~r\޼ۖQ3|Nz߆K6\J;%A +T`5B[:/xR^ȳxINx4 Z?l׎jb<Z 7}h;.:t`!3yUo77gwN;mm5;H]O}-7~_h9*̒-ӛԛe~N9;YWe;i~˿MrGWZXb;pDuqr wzy wOhfbN1(G2\%,HHL:(#X<&YfETxS; u>D"XX V<م~pzs Y=|{6dȴ,M~ʿUgW%Z*-JɁHelF,,t"1F KU8ͨ:؝(&!j7G1bëod,(LD&D.  
H,8ƴ<"Cr"E^-2JxU;c_`P׃dM2xddJ <(+T6F;iQr5@dBy sq?Dh4ʺg EaDHHh)RH&9h^IKI -p\nd )*W,iI|DDDAhQ(d N*>T0J$bw2 Fn~3 PLqY?swA-0>7UF_r88hIzL50g\(+̤ٿyܩמ_#O |9ykS{.Fg /Na}r\\)9N& /&S47q`'hGVQ Fbj7cz1xx)8f)Ȇz7_ꕺwkkC2%ՖV*m[I.NɦǏ/`;+q OP9>Vi^7+z;?7M/'107kYrзK E8Ԇ?MvA~$[tK7mͰf mmfy|@E q8m`<'mo 'nmͭ6j۳*FX:;ёܰ&hR/_ Ͷ˶vh~1 w_ _فՆɛNl, ; |F5URm]*AɃT&&D# wvgCcey&w:fpԣ$l ~mmc C:$xbD2+ԺX?FۿҷQ]Am Pa$ }<]dڢDd\kE9u^isUBD29WCivN-bf)(c"v##$)U_IL1*'6lBGn1G G/يz`Xŀ]IF'VIUH]qf+I S[D x!I8!I8!I8!IB!1jH "Ew=- UHTJ[L@`K)P$: -%!_A9Z ;`~>.H+*refM 5q.jtdQխ@hsչzrbO/7c KϟKWқNT(6f]!d5W{Px{J tJnhC}ǃy1?N-th^73吋ld=P<~StW-) 2 Mo;:d}s|23cͭF|>]12k<+BU6qA@Y :@|:,}wO5F A FKbIu\_^6'#0ǃٻ޸n,W :.6 ɗrs4%AKbO֦%z,=/T$rCƵTiT|캌MtFT Tu(eD3s̹'w?I'Go}Gu#UcޘE7UR~[?s LdkJ8 )ֈ-I݂J.*2& ֈz2kg|NK!It5jb-XB6KRթlB} e%3Gl7/ ;}oS'{5mfNӼb0O,3xWʚju2owB)!&ʓ[r.PTklާA(KanM㒿./NVף$~ M_Mל={ȭ ޼aV'zT8`"U"y܃5Y9{:p1! SW̑"DZUԺe*%yAWV>~vWmRd+'č7d).ْW!ϭ˜C:Y9^/'7q|'':A꾮ǟ9oyZϙV]𭔖 VCi Bf]_[ZBk)N;-KCsT/gt]lwv&lYyANf_???ği@aaR 89. k6}#\#*EXKNjF1 zSjBv$fՋa16b R]L:65Ϧ@J%U=Ր3aO5e#&GH!#IGULOb׿m}lD-ڕw-.}yYjldUe{.w_^(giO?[?=_-pz p]j=_H{/~&*k^_~w/ֿ]źmi5fvI=;?9<]g=|_.֛7O5X bagj?a+9cp}:l%*K/sFl>j< ? 
Fٶ}gY5%gɦAkx(:Q|%F.XQZDRڲ d쯏lgʼn/BɹP8nϯgGWzc-AͩKhqqB$M#Ԇe7 3棓 wqUlo᳟ŵT?>^ 88s#֫;?mk|dM1Lڮd63{4({;?bƜ]cR{$>faSO&famma|xmC8,lsvjﳽg7ۂjC0^s j" y^$sϮaɻny P߮ =ELW6]IrvhD YmB\N:B}}M.m<>~A7,CN`yzvtry;b҆޴ ^PI ]Qc?~؟,])yϕen9ʪȽqZ\.q/֭YpHݴcu0n6LgGHEq;j~@L8Hk77O5_fc0gyr|9G_N (v Y j$RdG@Ngg ]BBXݜYwc c؍]v͔N޽WM76_.p?,\g͕5a7{X22բ,(`4\P&֡*1y.Ԓf^R*&2Z470B#V7whFko0ݜ!ĻZtmבћ7f _j\!UZ+4[alm,U#bEpw)t Y34"R6=qj-w1Ż#Lã5#r-Má)&NE{!mEEld C1:BQ4 !劁FTYZ-慽C oT-fƔ>?„;z]!ب"?0i@rd|M/OȒ1u!%UW T GU^ؙ"Yjj9UQJVbPͭ.:Q:oa]9dmG;ʥa$Q_2<$fN08$"[]FK OxNӬu1SX/Ag!ՔK4nȗB2f;xrR&U.Y{V+ JAuTvRQ]J u;"TFGLxg@HS.22ƲO- OYnE D+Nk!N0:85Y pWKć$tn XjiX Κ ؐzzy}vM")4[RTKլ8261 ]Ml\q/j ETcd5bx%]P +Ԝ e$ d BX& H zNcSFc#nC7!vT9 c2W9^@g|?QMk2QlgJ-ܛKȠdDC@(M VWncRYD|1 RTX}G8i " mY;3lיxa6VW7^-ܡFdD="[jyŻV7& lBDB$aUƫC G^ʳ*[БJJeq!$\:p1'8;্g5:/cЃZJ#H VdD'pp^ⴱ1F>SA^ $/ݗ-Drx P x6cHpUs05WwVU6yn[&K ^V3I|ӧ5.}w][nT:C:4=I> \+= Ǡ]j/)6hpC*st]H_>0,%jR"2[{g6Eh@;Q:v,hf(ԄL܀B)B!Kڝ2;h ΊXXj59jpZ[b ΃pPD pdI=Ӌᴱ63k)Ze'JQ`?A.j880"vpZX0fa!$ݔ(wUPB@Vt"B-КOdib3,vL#ʃ52fo, R)M۾VTEK,S422߬tEQ$[ |ҽJpa/ R F[qiXq =󼳛9tY߮E2a\יؾ 2f5$N#0(5z-ۦmQ9YY:Zm+Uk6- ȹh& ] `LL;h؈gE>D5@Eb*C\@7""w[hO%\jQz B@69&Oh `H9@uoުɰeP>Ϯ'lE]1("NsZb,M3xr5pM9F|?$o^^0 `\8BaLE#HTقGu237vy\:\SPF% c80'y[ 1yGI4R+k֪TV% t%QZL~%e\5gr!9tMIYGu~9J3!>忾JF›-"Ap Wڂi#U>`(g02S M@+u!Z?MH( AOr|d8NԳr9XoCnV")rUj. /2cRi&/3J̐CHS(]q% &J# :okguWWz%8 Nu=j?oz/^tttv/bԖ=-n5iCGYGUS:KԤLeٜPV ;a7vBzB =_xUß[t@0'Z9V^R %Wa(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(h(U}JJ tOG sӹ z+\u(F%PʺJJJJJJJJJJJJJJJJJJJJJJJW "SR"d@Q}C J XIf(B%:3^ %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P}mJ  )+mHMar. 
1>%pHbVEm$>fFGUW0>&PVcg@rP@ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ 03̶K~ 5Mbp$ h u} H D>'!.;@ZRJK_qiu)ehY7]^_?|O~μc-/ѹ߼z>%^D@/r)yG$  f`L "#+`O'$5THc4у-N iY'^Ջ9`Ÿ׊az jQύ`ɧ?j`c^ց߼xWٴ#tayƌ <~p>`.qB||rSnM&kdn)~]s4u CjpF'z'UAGf5]?k+uD`ZF\XHp9gre+Kn6ild&ݏ}(poڭy9ag [a?My3ag^jߔ3W6~2^vv"~}e^Qط]Ժ59UaKĞ3+"洣q_Vᑟ?.Ku3NW[;.: HWĆNEvs|s5+Ng++?tu?/ iaViXu=/^VjY.ֻq<7;4~Զ3x1r"Ao|~N.Nǫ3r O6+Z)ڤqnڂ3L)+-ʿcF_.ï'dNLDey~UʅqnZ0b߮"PEHJ!iiAaz}~X(@,d k,)P߭p2!C }oE+GEM-v,st5dWp4lVփcJit"$’SQqVf x i#z-ybqVPjrE;(b ޅ<'#}/(ؤ /JףEnї}7E }p@i5M _3y"^uKЧ24:W+)j邏G Ƙ-ga3Yw~lKs/@(¤6Audi z(;<ъ yԚd@TJDIyh S"lMPpb zڳإh+X CJ͘pU949B`"NvtG|Q ̤.IDGgqZ*UyR c6#S ,td >`w- Ȱ,RCCxq!rP^AylLy$2 J>\ Jr\s%i%9\IJ%r~/0?‡gWܘN0!&jt X]mU[ 9X5s\w.U }=~\D ~VEM[G^2)۠}RÉV2*kM:ET^ lS e6n$ òT{p"J ~f& 1 7RxCLTYm=0WaRVhH۞\Yw"FdR"kG`5Jk4PBE /'R|a 1H,`'IT f nY&ipSM6^ ]Ejpz1\JJ8_Fu+t?/I$S;|BaT{U!aK*NqG ,ƅ_=G-j+JDZZ'E}gřL; !Lktrmԝ Ww{R^.&(`fE-6`MH>sTa1=Bt$4<͢Яc,z私wLnsm*C2%W-:|9O1yTllW*m=Ai֊KXB|D}gM;TYmZzz<<8[gk˙0WS{݋˺oFLQg W DsM骩\ƍ,N>LVAw;r^g;8TcmuȦZښbZt:Rn`RWS|;pvŎU6*`NVׯ;{} ~z{ٛM}}p۷瘨5:O7,¯>TZU57*C IR/m UE7dV DOM\g4:NgUN j~U uˎQIݪTTW&D4 ί1RVmo_l N;ѩ Mp(167:Mc_Ef gexPQC{NjufДҘv0W6]9355JwaG=< Bi@zwdjJ>i"9: AZN"5O7]@P@Qt- D0eHӽ$BsUӷqݺHksbic-2#?]34!GQc{Q{Ч .=?{iPN>->˶"g4`[($TSOKF8"2)M_О{ HfQRt;H dLh%" #6ʸ!UVݔw> TP'[18T%ga 7Bk-Zo< /Z`Gfj(w˜;nAy+Hdήp{ (+%i@Ĩ`oWɣXymM0踳Vx,&FPJC AHRVa S띱Vc&Լ豉MkD[gdz67'=V na}Sek~[f:t QD.w-GHn=Lpn)z ^zT إw; Q2]WUm&ER?Ā]L9|]UE;Bgltެ / ݋ԽwJ }<o7G'ۼ˜l9-{j̢8yVZ!c65͟~kGo5vhLz,K>قCi#W9nB@(١|$R=r[:nڜgw 3SAE2zby9ݲOKwK޷2QqaĆ3Ld>o仡iF4.Ǚ&OۋGL@4CdJ8JED5!@mD҃;S^d:X"s!,$S{+%Rga5Ғ;\ 8fkd/r,Iw^#xxߑYZhhԌbZH) ɂRCZ7()&`zSNq#bd XP1g&j2&v˸<-lmfk u˶Pe[p)r4N; \m&C;0-Gn9E# kfC<@Py䏋bmJa˦_z{zK\oƞ '9 [w[(mP3䓚YFI=fH#HEӳ i0/jp$ pO>@k sGΪmup] qȥݛWQvmfT֕j3 @.*@Yj(j#g*i PAV ?*`˟z'Mý Gnگm}zZ;*۵inN7)4P}SԇAnBEWGI9 ԔgDX}ti(EǸ./GF<K1)e,!]Zn5X9#VM/\Xk ڥC0 MLryå 03jg>LLj9+j$>K9 0Ou9R[ ,!3/a:Yt>[LTUW n?frMrWstyյ@t4x-cdFR+J{cBRs(ˡ1$NӚEոn_^&{K PKg0$b-G#F^Y/nw޷jP|ttL>ݎ} :7Nnj̴rdXZ|P"ɮG}d(03,MUzܤ> Ȣ;'gY4VH޿L[ IO|>rRqS++sed> 
Y0`N0,,Kdǂ♅b2q%,Ʋ%mܱ')Z:-m$n+ie-OZn5z^DB}Fҏ~/oaD[)u^m1dO\qo=mђ[&i|ξ9dV R5-kVN|ɼ.TYmE_kY/l@P08lOx[r}r9< Jj1QlpH)F:`J(Rbk5ώY-0Ef_rz-zݠ^]iu^'YRjP{*>C`%8K9CZx| aH]d&oپD:um/^D%eY]@uiz@ J>rj{t$9KTWOWAF5CqnUsۭD$vOzaErnJ'Ѥޥb6kq@`i\Zž+D%ͦKTi+ z0*KD-gBguՕd<1rrYm.O󙍐jag8 @SDEoAT?a409wxqItA)e#B h~ctQVM?\[TDjgs'R^iVWYM3Xs+ P}0*m֞JJ*Q9x># SWi iMջ1snұb?@NWJ3sS!pCƊh$`s.RWp8P5hSI7B.t/qI/LpʃM_5o^-(NRQ2gxi2%B3gGQD!HHAAaZ舅K-hLH+F+x@vBvZiӫ,/m<u߼t>%RDr+6r\.6H6kJSJR+*JRC `l&r99mD$MT,omB*r\.r\*r,79k. 4ɹ\.3 f. er\.Z.r\.r\.\.r\.r\. jB)"JjO+wPuhY~NbsXZ`4:Q0#+JKĝ4>F%HFHaˉ4qi: T+6^ԖiF]vkCyvOgp`z2Y,r3.DIAO=LBQfKd7n"-K>nM>8s%!`fqJ0%49CknK$"`'Z$\E$A2 ƨU1kQ`F | \5!Gg-JkQ~dIIq#`|fY\1Hmcks8 sӑ[!FRlB3 F.CcG@ȪT,h2Xb"\B6DhL$xX?+@ErU1F9,c$cs&1Pl*pBC4beDJK2A8s N,c$c=#] np]H C][gPvȄ)|i.èk҆'Z(e2;˜i8<4yAدy;g\5#ˉvsݯq.ii 4FIf8lXɬejbU~k(Eb![ZcI",^3bͤ [k_xy6EzxͅpZR 6H+띤^QN,BR ,J!7laV7](P PmGs+:2,"1rbrX+"D4-9w\-jNcKv|<JG"fË`>;>M~J) Gjn)~mm"h@WcL ǖ3]RJT0ڶU̮0b S-m=w^Z >ė#53 *x#1Axh hPMeqW!X-"ro0K{1Ɩ`k䬒zpt} 6Oi!%>7nHw8GvtJ,͂WwW5oZhזNDimP2+Li%FVl6ѷEH7Ez2pE x%A@#p,GEF;ET^)(AM°dZhna}RLotK `fS:PiD"##`R0H WfީB] Ց HitL*"LeXT_+bg!ʓH&H @Sz.;U؟X_$ .XMT(E"@ jt N0%PpKE߿! 1H,`'IT ۈhlHqhxHviT0\$|d0gP "ǒH*hQt}j!|k.G2oU &)c5cz( Ƹi \;)7u}E(bc2&; 䜂!LS7R J1uf8 ׶{w<>]]5,ÀP& -,I&W1';ˋE7Jc,>LF燾*OpWܘJG FmJNK%pn+ScSd]Ljk-WJUbO㧋f_+wOՍooGW b4rB̵ӋU=TOvQvTŴ_dpwZ֒Xt63,A}+#F1^ugHRK[%VN.kuY_%OZHj?pӷC_H|~屠ǪXW.\ /?WK?{7?_-:"Ǘ7&᧍? M˶b+4]0o_]f{_|fu_evdmŏ:jINU i;|WlTugX'*HhJ Q!ߥݸ3ہ14&vopc%8Uc,0JS}4)fgw?6ƾy4O )wR:fxC7b|FzxD5F(Q$JG=\ BЀ٨{nHJ'Iɰr-@LDj3 .:k ( a0a:!?bFHބjgjrkȝdh)G56ي2RL;=F1sN|&j\T" EΐP$CqSpO3>%bǠQJ+mupRo}>e9T<{j#jۥ ٌJ"1arQQ&U+ j%g"8G$*0:f" ҄p! 
?:7z9y2,4 ܛ&&X醱x[_>λe4LTsn;AoX+Sh8EWfԤqX܁56!fINף?5I5EClK*2Ĕ#"_|i3t>͞L7IT!0ӹD ,HfQRʈ  a\ (OubQD9O-/)5} ?x%͂-p2PGEJD5[kM$I /7s;eai>V'K73aW{|ڌ10@mȪ vn*Ɋ̌EddT2lP?pD@T9kS턷{]}hryMџjp$ K(v)Q|۹cNIi SW6?f.v * LC<=dV0b8sZ`R$8&JT39E ) cDI*F:ZB Sj*vJ5S7N3U ]OIM3d<z'`usщ~8-2X0A:m)"EXvΌ&k2H%^uHybgNJQ+~yf /A $H"T:y")P<|i6m^2peN&[|Pފ^ ^^ ^ʳb@Ĩ`oW)ce4y0ȸVx,&荠V x$ )[PƌL"^ˈi) ih^FNû n~RAO_5&/a9~ޞ?|jT7p3Xy: Y"v륮mvK.ܼx V;Dn$mI@hsש{52 ջuYzs0Nmͳw-3hڼM;/\i&цO7f|s cK/!WtMx(2jUC.s2 79i 5)K Ɲ}y^:DTe|^ꮗK 4c-:bLlwm38"aP'ݏdv?QٺSҾ0Z+&œc͘Zz$QؔB"lxpy57|30bQG"`"R{-a$EV 1mihOqx2% ;x,x=|shv -x3evWD S ^J ֒xN !`pT ( "շ'xа_3Ӫ;xO-Xhm 4wFm6*GT$;A Utȴ,]w |u .V:qަ(Lz,OcYrxD 1X\f|(m FW9[ !AAkf{yǗ >0t`tOI7+;9-|Q/ԅ~>_碌fwyy\+ZPr ! CdJV%Ti"ZB6"|@EƩvcCRl J(;7 qfAn$*Ű[)rm߄k"n'9-wqI[㓽:.nlڋ3=b /V(ԒD 0;Q3b0i!Uc/$ VjK I.L*$lh-R&| .U>M*F${5sf"(5ckl׌QItak.-B£¥ٗѬIS8XG!1z ",iP΁DFbu 9$NҎ)i-aGo^υw+,XI3yA%{Z^L8BRJ$j>=J=d@JJ}A VB]J*v) HJTrܩ cGCDMFa^Ԝk`"+an57ZX >Lp464,~4YΠFBa/Od78y1gܴOcU2Ʀ؟Ua_mi (&d %4(XIڜqo{Y X]?n[!putL`<"Y̘C:,\ʠj-<ӝ42-c1[/n+mU|(_uXkU;<+| o˛ [eL+@ye1݂Q\%*(_~{wT\Q)b7-`B=5cq*Z0ec3I{ʷSJ͛Υq6[Uxnzo;k}qмK1$KU ,EQ$+߁ Gm9[tцO(Mr 1ܖ~p7~ژ: ;4yIyR1bX&]%B2׈E*$x3^,,4y;$V@0PUR@-JA?Tc%k~EEueRUV}@yRN]}Du% }?*w )X9g/Y}ۛjb1t !~ ؽ&Yl`q 7`rEAk^֔T0I򁢜{S3vuqfPY8|{7T.ٜj0x}o4 ]J,:PRb0pIN. 
?ÿʄsP^-;q W:77\ne"7rK"{*7 ]YepW(>N OlFn`I,ƒۍJ˒;Xr]}KDub$.E]%j):wudSWQ]iX55Ka)֟y>⫝̸)dлWŕGv;u4 7")DT]RMj~JwJ*U©)rJS_■s:M v~U؃9nhI\8ʜS-dzc8xNʞS9-H9 rPf4╝2h4^LB!TI_@4>\ע,}X>`f>>xJ%Y%V8.BGWuƵ7(zdvYR<^e;V+gZN1K?5*Cz Y@SJx->c9v]i)uuSiWWPT]Mcz͘POGO|K\z߆I|w|k~8zhI{]e>-8m/b0|j7 n{PB$^晱^rc;RiD+mC}TRu]`Zƺ\t.g ZGK!3jg>LLj9+j$YZ{+OAMmI[zܒBc+~^oדܐ^9mU(kN(֘==c) }/8w e(i]V{c!%DȆVYNXFԂANIj"vKMFO`t{bqM~}ՇK钳Kg2$bMӏ丸r&݀w4B0G"`KM-a$EV 16J[8gA֔,Lݤ?m5@izdX;߿6_Nϯ9 LEurY ,Rg2DO"k/%%D )XZYh6!7idU@;{YZ)0wjG(*JCtKm79'Re#.40Ɇ *[mgť-ALxSTʴ`5ͻHow  ^}s}M<-VbBc d)ؤJ1#WBysݲ919\k>̾ օ o,/@nu{Wj"7*o6zߠ*15UHkcW,: ?Q=MA9]LD䗒)Qٹh*R]REp7VYuYӇBRP+Sȥ:eJ3(VWx7i87i>zv7iXNZ)׬^stQ`FF-%W(RǕ;i|J*TP!-'ɨlVmg`w"#+6^Ԗi9HZȔ f@㍖ImccIakOyy#FZ=ǓЕgV9XԞ^B=^m*Y<iWG ~یf\1D7O—%^\l['e 8x255R_v/Ge +$]Zug Pz:(G/*oUXX t.iEVac6R!G`e\ .Iq(`>ƀ=B=tZs!%=@:Leֹ*t,p,OXcM;6&捫(t0Sw]٥:|cDi̾1MܽgO:P)S"iw8\MjuK|G8̄XU2:mJD m"SACsQh=+N(HN\gfe1',ɐ"Z.ݞ.p" 'c1!9RGt>*T:Pb0F憩`clCba ptq9eB Y]2,6%ui v6O$VuILWWb$HъRNC#f6!eo S+<OmDDc=![aٵ2;8x!>h]tآ>B.{R)ȉcN+lfIc[ogw1g̊ײ"^{/唡Bs \Q!rL Q2d!O;|OPo:ɺ,M š$2eAXEnSH;xBRd.` @9qDz]YBbwi2!%h4#V"eB) ve^l~!&' 1CT+24%gQ2AȆϒZUXM4LԬ"=$sUcpRh}5y&NwU t~ȓ=0s(CUكv3zq{ez{ʤ[iw'عaLaIN81P6=%5uNNpa'I:w;ńfVs %9 B[\Pč4 ~*(FE|cL^ZԽ;Xdhd_OW#ѫWG6GKM]i&O\Yg9g;K}^z+\&]$gw43a.[ϻg㱕n> E4z]jַg؛ yVzY$~ŪLjՏq|c'RĘzh(;pEwsʧ NzꪾZǛSD#MuqU/4JܣE[oxm%-X 6ϕMvyso"m7Zۛ^;NX֧?ijR9^韐ؤYa'LM4gsE!C<~WNOI6ҙZiM/?"&g+tᬼӓ)}1`쇬 $%#9]2J?ܬ}wr~F#D`2`dqcYOF& f 5v;2V?idp੽=QLdYb:VYMv'`ecɁ LdnSAaD~n=(ހZQgU8{`9y.e~߶͞5O_[Pi}:M$ڳRrd(+ZsFi:YEk~ 3"ev`!se8M`-Hqr @3F9ғe۳8[}+mY[:A7 +Zdw&~{J qqlѾQ^9E`" cc(X ()j (>Jƍ3X7r 9|)Ζq.LsR]>be=OJDyRJP+}y W6hJ 7,i5i"@oe {k̑YMpզ4m$)!tЇd>NVê,/iP5Ji=ZxO5O1/hpT5KԦ"UNX= z'; e"AÔ4 G(`Ct^rPIǃ&rLcD#wKdtw)E1u\Ef1:̑ȰGWܺ^ecl 2 U*ຎZVú5Uom-z=1n2*}Q&!\Zȵ K~:B7]Z \81뛧 74:gCN_?utgEwVٺ}V77>ߢ;χAь>o3]|Sw.=ĞFnΦ[ ]K79_߳K\ek>C1-6ߓmn^8Mm6pF|]sOHHmbW4Pejm'MklݾdivܠsL5wJ9lDƗbq/m"A*Z4fp PhJgc,u%,s@%L"3bfa۽1qCu9J+A]wv5B4> Fg{x.vaw\ BFkY@SƄ<1)i)텧$ @<:9䘤BHNt'3VDR6YY߁=JX4vZ,ǻm)ڮh-=YNO|qly8vV Jg44 XVd5(iL}noq^Y sJܘ,fzl~|LfiεW2Gd8v f*qpU3+Y}DpZh  
\i$%-\=GrHyL`+W1u,pUH)[zpe8p>-Y>mLjRUB}0aw8;:+-_1+ο~Π?i4LFW\TLw{U<0 MSQ>7[[0Ýܿ|fWUOZaLʙ#+Xks4pU5p,pU}pU\-\=d[FpyB?\~&sU T/ڑ9=;S^^zeF3YXD@*e|1Ezb9] GX^lk`jA3;q֖ZIJi3]*'+㘒!wj%'E /&D^x>Bp Kj+B@yϕg^KOJE)2+D.p GҟDkm#GE8/6/̞Y`b /e#ɹ̯?ŲeKI3nlVWd+Ͻ d%ًlE6!N&E\q6޺H ԽuR[@{"[u;n>0q'['@\$m4|k53+ *OghGo3f*Y{$~RDc:at(A8a#wpnH#'EHZ'!c=h֫)uPH`)#ee Apt) ,%C#iYE:$$]NACȆ.X%J*Bah7_5H:'ccgB"FDa|ŲA1:9 }6Q3)RMkPjwj F?{eŅ #|q"xO\9^VDҴ7T/Ɖ,Iy~v})7eCY3_ϗ\W5LkLI{Orn{RV[ JǸŕѥZIɔEj~m/fV%1+if뵸 H?|x}(( Acc=OuMһ!쏋Ֆ.wU4=^*x].?4yTcZ>8~OpS?<@٢y{L{x^/7oWamagc+خ,Tz_ .w?cM=@鶩܍ݬNw1 G,XoI7S ߗ]ZRR&E^\|&5]!/ RTmM% hZlG1#1c4u6gU dd10"V`KbO_I긲mLt\ymcYXGOF$LzR ٧0!tAK,sUB AL,TZ#.ŲK,kpza. Ιd'rfBtPlBMT}[{@*Vԉ]7>yBv<xWo%\g< [i[[;.~Z};=8Vi[A L) ɏۧ|^ntsFк줃l:㙱Oֲ䢊9(X8 < KXOEQ \D0)ȣ`΁-1%V^P Wb"{>A^klydǣS1Z@Aq= բfd7RkYYxo_:z"nP/W& K;@IӯV&2ׂAH5W)o+;Ǟ߂Ni峆rlYAK%̢.%@Abt&s#]IJGmP薣$˼* f|K}z,ְ_a%h3Mo!\MLfz$ns7޽ݴnFqHvrhQIΗmsӥ; 1KHڼ;FwWtwjvi8-.(ͣY7HwVtg57ixp-ܹy>G -n>̻yBۻ|@ܳni8z O3Ď7;]N/nxkC.|ភ\N<_`9qnʊj-%R"0) o?6v==|n>M!V37ast;,(^e`7Gu+v'st\)r&zFd Gib1>hS>r (.T*Cʒ[g+2N "y+c1vx)s羼?΅xn#\u#ƇXV}DrK 1JpgDT«OgP;B[@\ؤ# U OF8yoS[2&a3g*hIr`e nMg}z\DvE"1,l+"K!JnAuh@Kݪ.t[ bAl|[!'+7̅C4m3,\uc_һk7Y]2~KM߷'\ˇn|!d)uݺlٕBZGQ]fR9KIbf&R$uv>vb$`ƉRkQ=姘<$@#ר"lhn[lWt5'4~q ͓(gjN@yJY5e|}~ >1}Y4Afs 66KP,6P\qZ&RWF59h*2 C|L" )$34h&SǬ\"#C,nq4+XfǩXmYn mt\qȕM(""tTtL.kSzvc%YiZEUrJ)dT $a&J;1fY:vkluP,T,bkcOYq^gk؆r3 5N $wY.I+D.&# I'͍L6 r 0@ mgTiZi'igaGPOhVݙEڔO%R"()c_ iad,WTJt|7huEx@2FH+gTZUWv=2[vNyRvܜ(yG-<9$޼ D56MozwpiذPd_Wrײ=|taz: U? Җj^r7|+ +A*m c R阌i曰9B@*6TNP(!WP#Jl+ФT|Ut8' M R2M ;$1Nd-8ƵY#spց8q}\LOz, ϳ/JjRz <|^ Mq_ܼ-pםWxVS /R*¿CppF+*KMLQ+&+U୨U4_.ɱry{BWƣ ٯ7G`Ogș R&s 5kHV'H AJq Bv Q}4 qs ǻMv[m ][%v'/ir3  Pb,B(tD1f\>HYX2Ӹ3 axÐQ -AqmY) YP+A?e^=S654Z2< ]4F E)V^!7mmM<45z~;];hTwᡆgZ|㖇Q!DDAc.-/V1 LbBNt3Oc>MOgy5#LFe20(Y9Ȓ .(b˵K)H:ӭdYv,O> ʼz&&H Z1l9^mH,!((ұs5`rvc9q#FA)Ίٻn$W|`7NflLf1_&hִ-9blaJ /]HS:Uig* e]+tv[p(`hPE][v}Yy/u5=|xIȁԠ0d$@6apVkK f"r20R66~W\۪D]u\ e*_31cZ,쀼]nryVhp"Rs%oa)&7.Y%]S>W N&͎{P|;q/Q^Joy)=$3K&++L1q8! 
1"\#?]Ϛq\]KrS==PFC`x{ zR*dJJ}R!tVlf2w}1k$`$sT1*ARy_cEiYQ$<:VS$')=w$ r629E%2k$!4AKq2Fy,0+&YB"g9(442ZD.E.2ߕ8[T: {~>ƫyZHKH4H_4٪*Ǜ~E9WU"JuUJb(+20V;Effs/i$g%[Tɐ`Q*[E8 #C'H"YTJglI߹A-|sC.ٵ SN2s=ET"R[bܠ)6V(}̷UZd8XЌD#].pn-  `sE.ȭ7*qIdjF3cǤc,je){$>!$-dʎXLP T0h)PhR(Cpο m/gPu>;P *za5 ni=*Wan/'eG|wZu `%$A,hX߽!WaX@[*t/ה;U#>"Jw `e0LRyƬa*2 Ϡ]p{p 饋Ԛ )],kmYqU6J X#,KXc]u&f](O@[9=j%Mw6"Ÿ } 9i7EnkF^~tv\C762&:.j}%T39(|n7(uֳTAڬ@:#*MŜ&Cry bv#'TEx*t*gPaC&W]/du95e!cI15^{>4nwrWL0JowTLx*lHTDHTDHTDʑR95~r$*Gr$*GZHT0uJnڻ??6($.2}&vQ~')͜ӄ0I| Խ׻_g[oWoį(E\7ҳ4ttٙ4"ty==cf͸~~mHzoٓ_̿e~fr5zA?]Јz{.2h54^z9~}yԛ!#+WJWtUJWtUJWtUð{Д *R*R*R iG֕U)]U)]U)]U)]U)]GWfEVrZ#ZFK5>GpAp T \ɷGHLOzeAeZ/j>""Z-Z$ui[4ҎMf4s#,10WüeGwE(9i!3T 1$"@L dFg%Ȫ1p:NG3fCӏY5IXeʂr*#xrWypS.!4H#%w/53 VUor[W zqHs)/ִ|:՘+S~{ǝ}TSCkǽп|Ow߽|?w|;.ۿ}ЩI %r|I{]7ūrԳ 潲!x}m?,Voxʯ;,\:ݜ{T GHlR)a?YT+Fp-C<  WnV-1Hhq .f+p|;me3<^ݟw.Ib`#:XZ˿mq >B&L"΀F }2gI#q#$eB&ƣFݡ#iUc#AS/b:k#lU1bGK,aF!T0^Bus 5Ns뱎ĒcmY$yvy e⣐r: ziqoF8h1 KKd+i`:IR$Fd<3:9aԑ^2"xT N,Y N*aDd8Sĭ"bT6ey,9[TIZIRx\;A6sR Rd ]$;g3L=q4x\HīLJ=hGՎ e~gbsLz/0?~]ojlA5E"#K ȴO&0LA WyGn3twNgiP/M6fp(]37h!-14ZMrIܠ2p 9Fi:YE5p3"eyV#X\N Xk R `3Ssf0#J "1F-mIsKZ B!Pos{k#Znk.~9invc;T)o 27%E>;35KY]PZ7\G1AQ/(O1 RH+؋)Ey< .dtց58TNjLF!ћj >Ga '2m30%rD#0!W\pཉ]Gg$r)xRKȕJ\d#ed{%L- ӽJ|tV{y9=~{omE-:\+^')n$u}}vnzv orIrJg϶rh㒟%4m.~ AB.&1jPvu3m{xC6rrl; lfunM~oo|flh8\z6͘Y󄶷#I n8:2w#szgk9t6t'<gC(޶ܪ+y\s{܅g{MHߝg;ܧ lљ|rD74oP`{MݟlsҽqL5GP{+%gKF%2$%Y,9\be.Dr L`\fرޙ8[#Bs9~~ r8n#هfhƇظi[8V>p86s H!V VD-)x 2)۔zNHrjpdޡIȹAt%$LY+F:RX;g?^=JXxbZ,LJm)ڮ3YD8󭯋SƝw`P:@/8ueiTp\X&뤭ՠBkf,7|>OtZxkJYgڵZ~%^<ͼ?[_e2e;d2&Y'eO !_L^%-c Xj̤1yXJgf#9tKHY%>7FIgwbr"'Bj :zk]YE٨]xtcH:yR刬S5SLw#9 v?";p̨ؕin#GE@#;w_&bc@l!Җ}dQdIIV` B>WHƺP9ZUՆŗ*(cy/)2g QQ8xpTHjpKL.PZ#cF,,Xlf1 Oϒ_{'Շ{a~_5d2{z?)\](:ʃFeWN:xk@ 䄀*P_ )TR `4HVzpHA\DVٍq6 qy)lv\ jGmȋtG&)L͟|WXZ4[<|R *As2hswyHқ^=$<"K\CJLHԐz&O@S˧]^g2%|3Ufjӛ#?vpsj nw`q9osIS Vje25~R:R3Q*Q*lt23Xj'NO5ݥˀ$|uLtն2<˿N58]5kY!ŅrR:Q' s*! 
)sL"^{gUgɳg;`\E|פg9Ք׷?Dnϻg_ mTLTi/k8[g8"\{v/n=W ǧ%n?*Bu:08ySG MLBXӃQrp<90>oAT[?&띜?@^D.ET*Cd,qւ2فK*("(,zF7S-w`i}y>~^mџ?/}Ey] M%ſᄌ`xr P + Lz eMXI INu.x EIE:'Ԙ\}k4'rŚ ȡ%NYJNQbT^:`6A@RxDŃ'kJ$]Acd !.۬;csk-w>~|ۣt^n%}npj}lׁ~gyht=*)lSM J@$6F/OrD63{X>gD<Ғ0De흉VՔPJJH*Ϲ$̦pS_sI] TҤ s!όW.ec,$Yhj&FЖ%$9 k N&!1E7csj1ġ]z)jkBF|x /Taxs0Uǚ0f `V%!g nV0b^_B*W2VWҩsuLd;qr3hG l>'֚fC}w76Y;$ ^dxbMI_I{Ky%tvRZj7_ 2%f+}9h*T͢ Z#dW>Oydcyb8}cl"߿}F/lPse$iS:*-*Q :*K3u#Y:5NuATny^||)>ơܺ#‹Ǫ+.B(?O$s^5D]-嵜DOr/Ƙ'QAGQ6s|r\ΧKh^!I7~?saX5%M^rMz^?X3 O_>4/>rÔ´f,jZMx3FQGݘ?n*zg/-Ooezq;r/sA 璳$p?^A~zz҅/evS:>#dU4h}bUaJW_nBI"ڣ;qE{%\ ǯ5Z8~Nj?לѠD?j*>ܕ 9Wp B.>AܩM$:tt4&E3D^^!#v*Cr+gU&`b$ֲe#ZW8y36@(mzZO8]XU܏*R 1z@ *U:_?zGYw4\Q^Z^nKnW{'vmu֢OnR'sLknպa~ICnαݼȲ__y0mc3=韅gM7HO|`fˮu>zǾ)8,>_OgXoƜm9!9!oJ)-kUb$hw) ,b y>v5tE *Y,513GޞsUi\ɎxƺP9Z_ݜH' /+TPr_tSd4'sTH\rRauG!+d1ج;q;K9 6bon“>%-0Ú=O냿jЅd8/9bS,eQbUu&/}j 9\m: 1"r CP1J%+l2 $ Y{"grɺƈݬ;;Φ!./͎KAm?`]uE`#hVzRޒE0`2S)ֻ]j{|hcCNs|ըsT)'-c e86K#cT5ՏT4Ģ:zYwvRԯ (+0^ "6""_oDf|=7'@s!,hY9MV[K>&}Al-.6ED@֊F CO3brM6`Uv)hU4Ffٍ/ kp]һfP\qF\q!pEsWze'Ķey}gJj<1@^;J<}FUjeRL ;dXq#אV yTH Z<3Rp(a*ΐ,eK@ (@y|4t&o& tLmzvuY|d.nnysݗiC~l~b^R~> U;t3IKz=M NzPetIZJGJu&JoL#>ScoAT[?&띜?@^D.ET*Cd "g-NJ/#:I)(z¢o\|tޖ;_>d <?/ϟ>{&?ܧy7ڿ\_mz-)gMs8!#X"B@/f(c Lz eM8A!)7"I~0(HoD.XY9щ K)TBLKǜ##,zb"dJjN%.1SRdc`h֝P籹;k?gu|>Q:dv[/7𒾂]eGE Lvkw_NznbUCA(N s-с,YxQ!癣 Vp͝kk (/#9fRTIfXLr_ES2 Yh8g%ױϧ xR,67ȭ[f^spu\T2TotDWԮ^jKWM^HUfY@ʠU '{m.Jӿ EɽKzDFz,v9fU1(Q"XQ;LGN`s}UËi[_?2m'ϥ@'ր҃hM>;>Vpf[>R$w_c}5 z##iX }E'TT2#]yM1I| =9RzP0l`ީ8iQ * 5(͖%̂Y ]($N <}2AH<)r]1[ U])ĹCߗm4ID sY"iv ë;« ZheQNS#f?!ŷQR*` gƒ!wɷAHه7g6o!'R-$Ȅ R: VRs=jJG,0il^lSPhX2, o~VL/Fց@,AI &c)ŌF%Ȫ)rxNW]vLB$ (Mš,NXE gKlBF2P- 'ğ);_+1 _, 6 ISf5+{A^=`FEti4ML O<7`..̣ !4)lxDZ}60-vcbZÅ8HT1g; 3?<.LZa Aj\-Šwdb٧Ts<ʤIz|P*a|cE{j9rީ}OH/⽹%.$Sz7:;2sǸWk28a@57/bfFGr Ff][\MW!x\;'eEsn;?SS{!_}H&D={4iZxq\h}S;+=EMN!ӂ+>QxI{{BJ3Gwo;1/s~{g͑ =VTuK ͉4/۽?O/^_~?s_ѮꁋKR`m~]KNj?}FӶiҩVͯt*6?yIϼ ~]?Wq:匫Oʡٜr䇩Bb2?r'M$Ҩ*Z8ߗӸyX8sFZDH {ѩ_Zq Fv/;Ue%K.DRG ۥ$">mkK7NC>! 
ixdeBfƓ팺m{G!F29n A 0VYLd,34&xiQ6'̄9+ģtgl-1>]cglM8|o5t\: g(( wkgTF ^z/4T呣k?~{}S>=]#0X6.gd7}PZ9/o^'m؜`l&F0_к%3޺^ȑjMpAYMI ]Br_g҂!ZҽVŸ6{ɳny6551!Z7Lw\޺{_L2< Ljzw^ߦ߹ Ǎ}A ^TϹl{HjG4fh6z>:_  ޹ZZvk.>]JSՂ'xFŝAn6j պiZ_3O-bbZu$N7%B_~o_̕#ͫ r4+/:kkAon9:дHҐD-9OC*g2N m2 e@d$*] KJu@83󲙏QbexBzG]|%}tQ[QIM.?oFHZO/7|j Tqدhq!?= \W/Vg5E34O+2cF􌵜;;R+Б};l'n=}XMOXj=t~>1@Jƺ/+p 8t]'aH&|I E`r4$#dɿ"`QKx %s2/ggJN?ZNwUtrj9]ZNW-΁]4sYM^|0kzpr,JI*JUk9 ,ȓ{dK 﫫EZʨX Yrc-TZZ2nt[d+J[E'-H A3Lk;sh -ywNxU)w3 l@u="LѦO m&h:zj,݆ JٟEb4l=2f ^I1&* 0<:0&ѨӌDj]ܡQ4-4 91'EDNu-nSG}KZBe*6%!ZȌT ;MTʩhR\٣[c\cܰwok Vnm8\vuUQ N}2.@)8@3$_T$mxe0JJ*"B1E׉Zʫgs"~1šۋkn+zp L>\a<.>%vųAu|ȽYR<]seD;B5aɵ߽/fl10<4\BO_d=<*wO+h#(h^ҍ铏o9~ F!_o͸==.l;#A(1t1ID1tL$ID1t=i 3n3}[oWj#!EhY+T>W*` g ÐH [Sl'i7BO"1D,$)D+C%A? Ve-Bc29A@4C,0i"-Fiݮs+>~V"FmI BsAI &c^2Q .L s]׳fkr:o|O#OvXWBbyQHo4#Xً ,$s4*N nd_L~1u ]RG+34%: C hR&(y\!2maٻ6rk-б~Իmw٢EO[,y5rHd$[%; C;yt )BzJlppeEoelz#;#?kI$U^a{?Q hs81\Iob@csfJb(o z`޿?7OW@<*(bzKd:*SHv(p CBU7B J1u',õܝ0El/`̬! H ˥CR*L\wM֑4° 4gz:; wSI3J0b{QMCçAy*MN5z8oյqKBJjBҽx:fVuO>TN. a4r&̵A[vIeqvi;P[K%Wt ioFf'GX> GFQL;h8tyf?Y9TkouɶV* |j.3t9L.U}[~^:?'7o;pRsuea70?>݇Ϗïu~׳îIpU$e 4Ѵ47m*E u3۴Ksv݇ghZ@~x/kGM֫nҥYǸ: }Ob2˞k۲T{ܵ Dxh;>Ƶ1ɬՑ ˜-ѩbcvJNodT`b쏑YYT`#)E~l m+u+W:՘F( { 6e`٘ T) {)u}:2Z 4J3r99{%PLDj3 .:d4PHZ 63BKI"q Pe*iX8ׁWsʗwV~ph 'uT$o_Xm,,SC$bAs)ŝM\m[-ކ;qMo~Ȇ +}]X^o;0 CRgȡA-+8bၱXh@a(%bU1Ńt NV12X k%UL`KE"@L S: Lj4+G[dEZ$,ݳ5/`u 0mVK>Msk/@57n;:;dH!^L$~1i6AZLRFv]o,&#Do2_dGؕ`1PԌ)V9{Jva{JE쩠$ H-lzce4iqָVx,&JTcU!D!e!x0k5f,`ZFL&Z͍Y&Ζz568t/lh7wĤV񨜾[T8[hޕp]$a7ZVu&]f7TW=Tu¨ilk!׊VĠVqc޶T*O@ bRIۏ.tE]z}0rC2&Ѭ*,ٺ}A7W(+[]oj~m<,w {jpn?bK^cU?Onkrݗyc62.x3k!/knQ⁎*lk.7uߺyp_&̑,"DAw׶P"& uR:}Otg}(twZ1k4l%¦aK;˔4VKxd!R&R/uhA #(H8Hgݳ>7׻cw;;͸an䝲U#H/.3㟓`Az)%'XK91ܧ<GЈ [Y|D4d]ډVYE 62( F刊 p'E2tg ^ɳ^:N{l&={iuZl+:@%O nϷ TGxY\Y?J%T̙dLPKi;rO :nӣS(ٮ׮Rci7-@O(_HJ>x=y9Tt5!2AQ@ U 1֡HzPK#6Pq*oqI0Iqc]QQ8n$*Ű[)rryq5ņ1D6Of&jH6d~0rdNK20@L "e{!YR[jNrDP!a<+2 %ZM@H1 N9ť_U D j0DȘMȸ ͌bΌ½bY6:ni UޕdY=u /~9yE#(LwUY&:b!R `J+F_C[B8o\E#aHQk % 6`V0/2 ]I;Yn.q6# ry,mv j;nsF芶 Gi^2oS6PEVc! 
31Mkع iUWZu U  XCrcH0>r(AWٌR? =X1x""b:D-Q a $paWK.`Sj4gZCAmPv 0B+n8XG!1GbS, A94a#ѝM-]xpYgYT\$qwZ& 0KS=ڧН-4^X飗@t<$|%3 .'OCpE\0uVq9!r|\ a55)8L0+&A8%vR0|%gS73%)'Xm3J6[P|QQ6}: A r}Yv._/}Dv+@7%= ʥm : \|wë˰%W)'%g8<bIXlBqEm:.@]v@J:'F: LTc>jue,:uqD<Y,CjX}>7e f2-\@_ӔM:FKkفa=LnD]r 04"\Rw&N=a8፣C B4Cz@SfNc0DZP() ~e cph#>. Rl?1v"FEy9m>"w"UrJl.Q 4sZ!$'OIqOVrAG 8tYP^Ir4X1$ !ЊB "BQT'$n CC8#Nj_H##Ն5fA0z]"3ci곭nѷ,EjѲf9C"Cz)ʭfI2_i7VLS͕\1`)^ɹC8)D^b!lBXu0 Ɨ.I.!p[\.?7?),S)a2^Yf)ڐDst朱{tMj Y&e}2([M I182Gϝdp$!r_>_6#7Jc-Rbo#O cEEAR+04*](Rkqkў';ऍOStNA<-V┆Q$b+*h ZVG+Rb2~~fE&7V)xj@b9 )e3£Ǫ9rIQJ._]}:׮vfx՜_<$S/>9 8fT/=}M`{Sy')?F>|cvX )8&~vV _i>6IĽ;b$d:qf^oyvdz>|}D>@ALl m.z87Ox_ φS'xv T m*ѫ+<60}Qӧ3|B~'czԋ{=(;NNOz0{wLԛ3Ё^H ~&WJ`){AhQhcجd,Nٷ^|Bu!G 92HoKm0HomH1BS7O74 `\¸`!@9 VD)  K Y; Ś {ӱ-jY X %`IJit"$’RQqVfH$ sL(6ȹQjEXyI06и4Uq6gS[s*/qpkkҒ_CR7kHJ^d˲%%H22w.ĈhQ({7q?BtGZσBހZ\["8@=AtM(;MgID3>[@aoTf#MIztLg;m%{,ClR(*%w x HC tΒ+9K(ϒQMe1oIFP&䘄G7UzZ}I(}kX y}w/s5.iw-{׸ :$1E5Wm㰢OVԫ\EvK9)bi8#+@qy'[6䌭ZZrVizX 9sZR dHP2*Ǟ!ҡ4au)F蠖X`"s\gW&ňAvs U7dٗX|. z# y&+^+Ny )[(=ӢѼy=ͯ^jv]p0H<|ltr:[oFO/Wǻ0ܳ2^ u'}'mym6^vdyϊQ^lY: b]u׮gEuZ^RG O_LRj[F]gرSSs杍GityOӏ?KpO?'YIN"Ew<׭~[+7}>b7;ܵ}-U >rqh:U,V3Us&0Gy+,6lh4=J.ZP*#ŭT<&1x_m^9ƽ+9}4Tb #)i;p(߯~?:3]0IxST+_:7'az_Y}ۤNNP^`y^TO.ɂr"dƁzwdiV8e1Ӝ{!C \H֠F Ov "P V{ `8jZfW/S7Fz6FĆ"7?]C$$LؗE}n&} 8=T9mVWg|&kS8kxѦ*8A$T)Ըl!jRS!(,$/Ͼ~+Jze|LWޜ9IkV2rҗ*bRk$rHRQbE˛w񨕣]_o637|85'|z^B ixOkZ bRĘD)*,'EY!\hPeS1od.I# 29-m49" NEї8ULډ(vd&.>v_Jt,Y fX-NˉYU4%{$-*/[UX]1I&xkB8)< B d3Vg /lD( ]߉3gŎ|\mvo^+>Jc҉yV3hiRu ս)滁]C2Cΰ~UsRI ۚ\{Eb&EbR]0ao⬷-_敮P,boQ~-⢿"+ H'$HB42 v$ #3 bdD:͵1(0 MwR[sb,JŪTL19]lUq)1* b7q6(NpZUGf\,y]`W"+ue& }uYGmL %6 lx08\{z<{vY9oȍri1AƜHޏx {rdt&͖!O>&'==2[N5`lMM!4&19qK~8$cm"{q Vhecv)#$dO1,1*/Cݚ3y !a?'݂ӶϊN욮]o\)"},m Ps*Zua1+Qxřuí篭ypk:i&%(ͶCnpݮ/zmE zr|\\oyuu03o27rtC9'Ի[t;bzE\>\n8.j{-sU[!2>_خۊ>_XO3d$ʞF6Jhꁻoہ`ĶAp$mi4Fz񘃌dc  sƛ@WjԆt!%]%KtY$\$|ܽ7q6Rs9|}Hmy~{9nv}%}lp5;wlvw.-v H  $9 )TW rOyD}c;:9={FlPIZ4lΤ&'mc29%x7#Wf?ƿ\7-y(KQ+&}LCVQ;(NZ@ 3iB ާ7o7WpԏQ8^4|-ۭfЗG-#x2JcM_]F-1g̺ k=5eSYR{H6MBۉrE؊Esy10=R)I\0ha. 
B8m4j'MPCjDM5?k,$ gψ +kEQ]rR@AːgЛ8 88X-n'g{8WMp%8x:$ 8!S"e,_ IGHJc$5Uw=~#*ϭZ;=)ʎ0nKC_Y+Ƈ>8f˜%¼wQjTH"hCU:ӎ&S="CCy1M)w#3nHtG-vn394}p$MOg! V13a=~1k6Ffзk>qoi .LD×ŗӑ-הV~nNfsV3(<>v0Js4r/Cdv^Vpx5k P<] )Z:Ν9\HpSs#xdO &0Fs*}~/Weܩ2CgȖg<"E\j#J9f|Oϵx.M9U5MJٓqϏgwOT#K)PB/輟Z…@R\Xd%jP0+#0ae"kYgSR*߃WaFG w%0gtH`/"&OFBd]OpF&(׫fkS jN+ ?vj[JSIjz$xWi&Wp /ة!?~ͫ{;pp(zxx6]>`c6NCO(r!O8㨥zΠdW,墢LVp:+9%rͿ+SPAW?*JB.g%O=an㽫J7LUKo5?~8KUblfw@3 Ƶ=(ß=*Nu!AX\z>~A<*kç,@x؏3oZU9_(tBSOr2E+T OtZ2bNGәld5[>6*%!ǯ*CBX;"#P%9<,|Yq@l5o^ <lA d]1Löhwv,SG{c"}L3eY/2Ɖ'u\i+V 4Bv%+RW pq7KX>rYW N4Y!ɺ gDA䇄EOoG~4h"|wdIYI T[o;Ealj~-Qv[jZCMNM?t=RWH2JuuT;uB{2uuTj*;uՕ1&XSP_~|oXr\<-*kC_3.?9ǿfta)_ゥ{8?\91d8-Wоœx2|]y[|n~,ؔ%"'+!(y.dSuLW FlSwNԏ 툴ފd g ':jRdDj>]А,@FZh.TDP:eBQVX2xj95r1]gdzrv> YrƋ۷;񄜰v3ߖ詣Fw5,ۈpVG8/jtN;{;r"굔XK) 9YҀJZ0mFnIoAwy%9k6p;7HJX@kiҀ&kR2ڈc4@N2wkyrIj=I.)6(kpN=#F(BDms71 󗻘vU<<6]籧'm*>=˷y;D]DXW>ߚz8#K8BڔH %"hX s3mf*Rn20uM# k&rg g K˨BdTT.h) eYi5+ބ 쎤x2@*1BLQ'JZ )zI:wH Ml|CB9SMp4X# $)1gLKlcdmM2Pm;h|\UQ/]1Pʄr]%1YT:0ǨQy!)pt'c$csmC^mkB[y͠n;p[ہ#WL0qܠI/TZivݕ*Si:GJ%dbԬHI`m\pDJι+U%ؐʜۦyVxigi_].-ϺiyخKzG{uq - 8!ٞ$sظ}8*E O[X1G`bW:X,q:' 7:$? 5}?^j͹0.x̓"cK&Z˥sl$)N HJQ0Ĝ?Jڮk9w_Bj@]2o;_׶}4&P ^;%c^HZӪ,2\fΙgזb<e6)[\I3 -W\b[oޜ]OYJ _j<^c,hC"q+$EY-M>%$Ho:Gҝc{Ύs>%}B,?0-K_˥+$s. 
BfA3h5[NLQFxQU>ڥJ^ZZŜ''/aZkTV$(`tv4ZC Bia$ԱV\ w V)j "RY2 DD 4 6|4{(h*Rm"v!E[åse&eo9 ;.hZ^?[-Lhݱ#r'kSE$7i0>>~{`??nnT{O /TpGǡ 1įNO8"oU*Rm#I|z }f :(~8:Run-X~/g=jw֛9ELcꬓM6U3 r8Ǚ%߆#?}1F#bXo_cGYQ<{ޠ])_~߿??=?~1eG޿U/8'N# h~}1cptu[Mc{k4]'|o_]fڽ)>2h{~V⧋?qt4<;^dO>tZ5T11A($lO?QQTIJgR!1À@Ӿ|ึXJtjm1[S0<;G2<;P㦃op 2(x ,^q~dǧ9_MeuTL0{*2Hbp"3&'HhНQԻ#3\V.̒}=A#ZB!Ĵ8'Y1Zh<2IRYMq 1%t)e.Bs5Gouߍxbic-2^}\Cp/$E=Nσ1`o+Aq\͗&ʫџ] W1I#u!ʔA8iWD 9?\<ӹxjsdN&Bb46d@Ds--DDoJV Gr D)K }BD 9V5H)i(jGsַt9w$ԉr6< |Y$F8BU;^ϥ{ePs\Ǡ͇Ϟq|>)r_#=W9 A q-r}k#%@IJyRP uѨ , z;^U\0CUNzUT蠂F+ pNYQ *:H0&U޵6rcٿ"4>Rm^@>%`vInikZ-ݞ{Y*ɒm=,QV]qQ!yyx/)JuM(yYɭ" `6E'.huG ;ڑt}-{:|(4qM磏/KB/b~?{̬-puiNKջcr/U+ˈb$J™\,(9@yl1X (EhGh"!ΰ&vR9* xdmr,ձזLac[+O7ZrԶx~@]KUיfSu ^v^׉Kg m6c} ]u}Dǧm;+,]?wW9tUfONݛuX’UcW{Px~5mdG}˧o3>a$6uSq" w}7WlDk/?y_!>o![_(6W1Q>6=}p"٢H\eS0Wi.J+˼V>v?S.B2 R>v 1KA0aTMbc*< <(aME.ʩcAJδTqxL &P#4Z"@'DLؽcN߿}u;~m@ ns5NȠAP+BL6ZH8igzԡ=QD5EgIN;kd4 62nu4[mMr\ҞTx,N-;J@Ю6#"$_MQ(Lnw@ m%Ga99yThIHc( D#)'&*j ep[(E)_9}wǷ<%aL\gf˷]e`@'&1@$FQ䩋L&c.{(8r YAҬ5HO!ZGp(Ӛt"8oimx m~XN~ym:R@ˏtKX$+Q3&-;gɠ\n҉qA*2YWBb<L*\ۓ(ʌRBrhaq!1H=k &N# A2 MHRXW)ƅ,X.4P\xV.8'8˞Xm|_h46oLRҘE T!1jH (#Y骜^XlЄ@ n%ݐ=(aQF3m" xɝAQT؎h(7I$WRp3Z.fmamӳvxt0Ģbh.+:/y3Fƻ;_FFRYgtgQ!ׄUC"^|.$Ǡ:*[p18*31Ǒ,#634 p Z% .Y&k0͜,ð5=Nw" ȜAF$0z4eqyͣ{)W*:_ZץrDj8[މi^g1+9ia^{^ H9" (ӽkJJTHA[d2#EZy,?uˇ0 Vp'^ t/t nr䇡cl&Sۈ !{^ͧ z⌄(+5CsnETt뙹:uGK!Vp~3K{cH޲e23_K48bsoo0s:Px42(Ԅ`4Ʉ蚎s,oH!~`5g/ۆ~16?'[9T *>XlQ F}25l1P{2-Iw.Ku!*'J0Ip$]Ih mO3'Kq[;2kNk_50u,K$G)|ҲS=1`D(#u*Hհ mRB%Rq3JJM$[9x8S R0cb)aaV{vIkgΗp}lW6ݶ"SEx-%J:뛑\£c1HAX%<'VF _ϥ"XoGR֥'x N'Q;W.D{{IHQ긦Q2D!(nS#E`(<1-g3noN>T*8^ޟ-ڕC_>{% PN0eΩІIH>P/CAI"O,m1zb1$FYcu&1ŤøGsaAPkbiJ;u8҉3Og*& Pb :*LPQQ),y@9<h/-4Y ~bzx3W0~'oS\lueEi#ʚhVˇ7N<}pNJE|%BwQI-˄Z.;]=Оxd: =lyP 1CFKX>MTT` D0(R1ye¢& ձg2̓uGb4ɖPq\]9:?뵏$:@G;g9+3 hH2@qK ng}YO-uӵ2g?J8ENQn\dTn =4os#dBs5H`9Ƙ[% J%E񉧨'`c{/Eޑ֙69pI>.qs_{w]Pݍٙ1͙!4 A= XKة5 LL> Kê2,Mla&yvF@e/;AzXRr o7h&S8 XF[++HbAd^f5AÝ+z?8k{Mk+Oh"c)ʨI# lN5gxG֊}_u5۶$+fǦTSr]/l=Wu8ۣ@)zzu\kCF14"! 
ã>!m<l$f|)$c<#(AFDfGj/}\@uNhb6(,$"$eKȦI͋3rfb Pjvr F~A~=ג3(M P| X.Ҥ7hRNyf.\b߸b{=ߛWZO[xUI{S߂47>F1Sθ!eؐ=%5u.PKe8sqog Yfvm?fVsK%fWZ\`o4 =^|*( 'E|}D+?\owZ k觟3>jhYzl|2.h)/6~G{%m!=jgރgݛ^NL_ &/X5mv_<_\NV]2Tz7i]y_bpig`l]Oz}OVtvBi8j1FbG@_'N3W6:u{VܪErkthutJwc¯+WFG86*lhTӜ紱pz74?|_~ݧ_>'.̧Ӈ_Ѯ'I an"Iȧw]_k57Zءk`5z6+k>~_>*[$ޯw_ _ip<=-g4wq*n{qL?YT},^p[1!#N.6? OF!!=Ia7'qe| d2HWNeYv"#}'aDYBLxNOznvqLF0hcO9fW"KG4T+6d2 X;lT@ DwS\g&w7GnZkN[A^${H:O4LÒoY /|?~\yRV a4@RFs&~JV.)\lgR2H6H]`{2Hh\)+"1@%XUq Yk D)Cn_g^ڝӴ~- +~}Cv 7B "Ao64J]Lr=Tȍu`=5^J.:Sp^Vqsjoa&u~cǎ^IP?p3 )bn}pͬ22Rl 47&VEQC.Ҁi@Ꞥ갥NVR5:"?AZ9xҪ6BMdk$j9=k#Cªun!BB=|BB60qSr%<7n7Ynija4{QJj)݀.Ok.Ln/*K6LY:B[hhwr ʛ^w@;z #v(zGA/qĴ 't>Rփ,7NkqM&|BEsWp5q<㢾u@Nv}u&qN1U%fB%A*`C 'Sة)/`S4]XFiT\N7hOJ3n_X/8mn \X MVI!u 9nfp2c\yPL5*CpCDpG r1u ] a8=96-qHA!T]NȏNV&gPE,U5T&#&{w*u7/X35ܱB=O ӽ'_s?י=Ѳ&BҕmΈ:fCWK΅:ZcҚ]=C"aŒvQW ]V骣 w8x7Ȕ,Hm߿zxǍx?TvxnayD}** '薑<<!<,j牻R2@0WwhMz??}o3t/uJr/B ݟhĞPa|DF|z*Qt1IԿ^Y4룶YGb럼÷>/7{H1!>d}(/jy7@RњuV*fK[rNLq,'DG%}k`"z\0)&־*!9.1Xj#.TL.d9Ul3c)Z}q#KaW*տ}so+ĦvI ג3xSuZ+1A3WmЌ\Ԋ-@1H.;]A~@Ϋw_@RRk"$C5QZ+4TAXj! 'Zp!nP 1ИM JٴVkΜU8۟{["Z#9neɺ84dҩh[І4Gۿ_M :F0T`̿Ko)W 4J(ּk94Y(_[.XD!=-@#Txn.1׿mǬMBK[ځCB'dHQOCy>q.pw{VeTM)1ڜϩR.m(xլɵS4Jgnu|ލ֠5b'XAr9oP#d>E H* b\)*2X %$ڨB/2'H-HpTD@ (X䓶&r&ϪuFQM٪\ -6r%d13,HXF*Ht=n%Ġ:*D{RQ]J u;aj_e«P4.CZm`0`%Dc[1PQB m>p*h-䩇 TTA4 <(GNTbUlw%>̃dB'ҹ1cmVSlNc`[:)tŅedC&XǦg*){ p=`8ic֢RR.~P6}JC.(("@pl<)O-`RThVM`6Ͱ GSYpPGJG(AiTUPkK:j"(XMӴj(PB]D#wl҅Q!ՠ|ͩ~+1l aA0zJ$> LhIDk۸YfFeP$֔AE΂G nB(*?%9 0+ChB Ls.%@ qSY(i+XCeд~ζZ *7aSѝID] QQF Oڳ; JQ` l Br!udlB)vNQ"D5=( EطAj yTTg*CB9q4JΣQ8Pd! Pe؀How es6õ+|[4q\gz$_nz1!.UQϘ5(NcEeթܥIk!pB@`oc;wvL<1^ }Zs}<UȖG; LМ62 bZPT8xic PӖtd鳒S rH \ p1[;IB'E"4E"eLԴ¨&#`OkEJM}¼&H.:t_2Hn)G͐ B@8Md!*T?Q<{Sby'qm,93|Y͌ QHkmt_K<Ey\US־,Sе21wPd Ť>&J,|˅#8´DM| $$BoL)X1z*! 6#-Ch'Q uY@/B>Kv%A펁.|I5DVծ%x< ӫAN}R:$W}8XTЙ$ fJd! "7W a2 1!tՈĪ)ȡgZs,@`u!gѝk\#Ѡ8M`m@\ 7-[Q\6 20ߤvBEAH@*}AO#OW .!)H/m-Ѧq@y_:χX".e8vgZ .] 
ݤh5،ѣ\CH#}kPprY:Z-ZڴaΣFjAK72eDoGC[O*3B& 9HJ eoŹ4iRXN `X'R!J HπO7 ,X뛶u3,mB?M7`E]1("Vs:Pfj 5r?$wQ^0 b^sL 1exĩGu327g2^AN lU?Y YKu~9(J3>>|7)" .;z!@ DU.L`(k<0hZHjԋaBj{4ASQAz|dNГzO^I'aٗRm7o@:h NCqN;iS&pUj.CD!}N1)%ɾoAp$ Bq`hꢀ+9"[ U\2#BrB;.:0B [ c~W_E[}ZVn)vmWEkKCgtcb k%gaNˍi#h}Jg@MJCf#eU ~ū6X|vUﯠAE9.}E5Y.vMzqs0z9||q H?48[jl6볗/q/͟Z3m뛧PǛ;6-~n>]ݍoBpŇ~89٥g.:^=yQ/mFBTNHՇݚFAa4.vr}!a- V(~ .\dAH"?ෙefYF{SQ|81 εg 勼mRPS1lΝsNY) heGl^JJN zu!qȧajEza ;uh[Q-m(d#?3k"ZIݰ@R|ja+ZZҵ\K m܆0S [d}LZ8䦏v 0=2E#`Ԁ)>w=ON'QeCijH컷?g]_5vIH/e}پ;y(j^M^3 +jZBk鄱:@_ߧxk{=`Kdٸ{p^ANNV>q2 q0 ~"ij5  ﳟ+:WkGTb6mhoM?Nf;@4n6+L.hJHVn=?QBN ٳ; ~i64NO;}o;t޶f]鍶m6}5 t(QXdWT7uY(-Ag5F$!E\&CW+@ˍ;]J5`QV UҦBW|{Tη +A-',!ʤ \K+et(9řcxNi s]EBWshߝ@:JRde]E'CW)BT*tj3Pm36BWA02Xd*µ2`D)1ҕXEڄ kN0dVUD(UZ-(M~yd&2ŗ}~STP?E 8fR}蹠<ݥbgs2OTT߆OְZ"(5q|!R((, ^}FXB+~֎~Z<8 ֏IwɭzZ>?O~~~_H/"Oq,myfH H  3eX8'ʐPJ#dA% C VUUE阯+1P-* #GXjH!9S`JK*Eplİ78è%2jgVwؤ75x:٭)&n>{ݬ)'G/K4NBG eEseR1#D;/J]=)eH =j+hߑݠ| $=K ؁gg7z>7NީJ2p(\) KSSs;Ge*j p*4nWiV[I54bͺc%.PM5 =m6hw%te.7E\n]›쥔eӇ%_)."1bg'_W&K(c#jyBjRz]|,WTǺ7lcmp:Uuv</KyMn. Q}UC;&[n:|i67qi>5feԱ32rCM.oC6{]swFfEj]G~+`cA,|\U#u6{S}+_*M0~vӤ/:_\U=)_Ÿ)oEuK?jH4h0S,Swfgwfh'CQHw<_Ip5`u=K̐hQ*hq^`Hfoyg9ƑnM1_x8@b.nԕ%Wp =:rmevurŧwnv JYƥ;TW ⾲B (5NҺ}j&Iܜ{nA?ACLsC!́Be {aA \ۣv iVXcrC\$]X!rUi =ox6Gz ڤ)XGlf+g7RЕZRq'D:]2ڈB0CK[$'f8RY_J\R9[xUm),uC$uP%1BI,gtϳ{5 U*0W΢dC#5jw㔻ubsϲHHB2]Vٻ6rdWyٳv;lfdqv,eHml#ɓx?Ŗ#ɊݲĖ)*~URD XkZ9_@mb^"sHGD%IF։eAd1tFΞᔤo0DyD X-ed5e@IJB)bc-)BSlV(_h"tB' V0i̫bP0I)CU(|IzDV)cYz@E]M18Fe&ZH*MY"aT\2LrYv M&r|3Ɖv@':0(Nrp3;393#0|)yg#a מ5h|Auah4|c0&`.M ǒ2A{֪CEB>8nm! {|#x!ZH&ab .Vk7?Bv$V6rF hE~ XYR_MItu[ip>{@Kխ=Zb0X6Or+wm~n9z7mA #6/O _~dX|qڋ[[m޶ ZǛmiIqKY: /guMdh]Ⱦb Dl"FՈ?K"C@[tRu'P+%1'3h @D!V D,`1!hPNmFZt@ʢX}!#]1cIF ;/t:Xgo3+)M[i/,}Z$^bg@^[y3At}ZȂl8MOwD;SM,SaMhtf!,%lwwK1;=A6 ( :rh{6eEw }JJ[B0'/r!%<+@TSP0T!KFU7!J1J%PpEFmAcsL5ˠ t vF>[Ğ]\.ЭY!~\ PwUxjFsQPG->HIer FI^G MJUDoȲ(Xꁚ|Qh{'bVҴ }{ly-k=o:]R HqXxA  tJpڜI*)J 1sblwL]N;R[]av%|=AqTO@ܧ<$.hڻ{of}Rx֙’RDtVQ"W4iYb(I FJeAG>2! fs Ig V_N3R5Y*BH׌WC. 
U'1fm,6+jY4`8;6(k;}`Wߣݲ4+][AEo FՖV-g4߷%ЄX ,l?um4Z0{M ;ͥ0Jɳ6y3;(_,4gZMXFOH]]gi-uz)] tŽo~WVI!JLBPDE@31ӒyB5N*Ѭ-N/ݛm̜8Bu ,~k_+*j `|"Պ\ FJz z{[Bhrzڕc3-ZͫylT?jo|5{68@LүQ9ϭ7s{@Kmլ|G=wFm##ymèuۋ53bѸQ_/'zz3b8pNQ=!YFVK>-ylye N_W1h~~1sOw54Zc[kWO}]>q?f۲Uhi OWyt2=-71 /d3_38 袥/wRP.r+B1À)'4ԑ'I2|zk 3+>ҢDQx;љxy%b) $dkZn=h0jC1ZU42 ub Q%P3!x4?(՟tssTu;[ǖoQu=22<*A:]wu [{)'6M~5vCM26 4AI<`sx;Tv̓ Y (dYFLqPB zyXK,CaUS$aĝr'7"\Z6#d5dV΢b%J5Vu]gX&NƿW~&x@ӦVv|juQ\ !f/뛬 ࣇ1B}F+|6+>˕pe&/^މ[e׻҈/Ws~69@62$YxWh]4 FH Gk҆} ?JG $Πr9 J!] $xe铈X|2t {"$r(A1'Vƒ4*"%1T|T]W쌜=cA Ⱥ|rƄmf 8a[BLf<.{tҷ$袩Yu6{O׋]xǗOS7;ITm@sUouL_0?n? nnww9=iqlrˆ[6պoBY٣[-wxь7ϼϚg_ 3=!vs;O#ps4lo_M4'2͉O`N\HjQg2%c.0e|]7.}w/$IEz!6`l2uuݟsҽZ 62zU\YQUN 8KuvhHFyc%LXtSh!hK>P7"P޹O.'D|GDfhƧ-~8t\ 0CfU2xB , Bjty=KdqH0)'D)](A`IIAaW&$spgqU}/}k4:\ܮWhJ*iO}1 "JdǗ+[>&!P|ѮZi]dA ұY,IEҮLm ცޥS5}"[@g+LLdbMFhs^@:µ+rgT: a)Mz4CɃ;9=>19t-OP|dR~Z /vK`"Muu6Vf EFmֱ>F[.f_PG vߊ"2G zM*2m*T,l* c&@ mEFƺA~NR 2YšJא$ 2SHƜYGȩNuN3qک_U` "v>DDκEfGDlcll!%IAG>;8,s;Inkޥ{"!pL)6I+1%g|T2p+@PV:U>tץ3q)! l\_pER##\`R! \oEu(pEr{늤u" CҰ,J9"i}+R^$\YTBU`ઈv(pEJ}"W$\!SL `ZO"f8K+#o+b;_jnBQ:hvq>v+qNXtIj4]ɿ2o򷟿f߬N4hrs_I-֚oܷ:+ƬdoU>N`V0(i 'V0jcsf H`+`ઈWEZ=\hLp%890*UԶH#\D3X3*" HZ-DHW/ ]\)!" \imH 0K+TëEvQs4bkXeIk UU3}XJJw~m؝.>,.~U1UiS)TsCa޼jϞr_f+/"ќOCDe~vaj{svg-ރ7g L8Ô4 *p:PN KY:~<`ITU&7ZEp>ZJXz/ig]iy,q#OqB9 E2L;OV -xhD LDI8KɽEF }K{mqw6L[.Jye%+N>^+E c_M&wM9.SԯoY Et5a%fLёl̷t`}f$/Ex1;NkȳVBvu6Z3^W# aRiATUZSMcQ]J4V9J\Jҥ6hS}6Uu- r(R:D1 TJ>f1HsZeA!FKD2zպ}O%2kDZZ,ǫ;b5m;`Mg2@_/G=GWmLh&+Q!Pm|ѹ]=8w;iq?UZ;QA={$9{^9WJ6tR)y.+Va9r#ԣ^׫`WSG:ߖ_|RH ȃ&xU(NBLL:h #"Zn #0<0dY:%U `)l'v\ Us" bflIg~D VeJ?,ɒjZB];4Z$Use!DM|f"lXRFfL仐y x2 =#|m#l3b[Y&+49B" `J9d2%# ҈<5spU+ όz%s7E9s{t\%d9qnݐ Dy9j}gZXEY-:{хPzQԵg7 H'!;IAWuAߪʰ9mblEu%f۹.^7>8? ݽ!ɪ7݇h0v?Έ Gn]cmR6=mlh״>Ss4%i3:dUIڏ.N ?|..t?̯: 96:i2Fl*Y~FþkacMםqRl/)-#?65: {RRw7!jD夹RYs3mˊ! 
oqZeawrQhl&A/˛jeT/?-oһ{H9fyby0K2.O,BznqE~xt:m|`̻N z1РvsE i姳*N{Ps{C`tNu]J3ޣcXu}[l;{lpGTPO+RytÂuDt|V]aZtP+ R*)grEG',KNҧx*b LKJ"AP<ɪ3|[.7"bXVI޹킿*6~:&mN.1h6Fƌ“͓ Q;o.DAbߏ$_=_Ap; <..ND jETY#TB!IT24ҜɑH>;rc$:I1{9gQ Hl 29BJZDd-[I`Sft8-zZʔǒ$>!$-dʎ fy%7Z &QcJnh=> kJS8PJ}pಉ;,^( H^3MeAJ1O+IQ b v&IÀ' x,/" 2K=+  :](-O"FDU%D.A>Y{[FA #FD-2$9Z ;Ux@$u28x9Z@3C<Ӧ 1tg0+HgC]`'<3)0@&0PZ+DPLJ;Z85e/?ߜm7genLbJgz7_|z 폅}"=M\%?|4i5VuIӤ[0I~>j=7;lboQǯWoP}Pec"<՘HԻsW!o7r:vۑlIVMNV9> 'j}tjK߹y-W{N|&+R Wh &t!zjBqL؍׋ŤOz vaĩю0 eK~n".^%Ho*I&J ]U)w9$yYrpj8U%pw[=R☧ԃBq_iJg"abܕ!*A (cP!Ke΃tQT_tY͋2Lob4+};ܵw r;_6;XvO[8~ݒ,zElIl5E⯊ezWM]F(Cl^o۸ȍ>P6ZG9"l=}[da`\8B\גپ Zᚃv%a)/*ʎniIe.q\A- @#bZsXU:V:XOXKr,9d'j9eayE|t|w޼ooNB +fa*P=e;ɻE)O( _} p$;3Cᙥ2#'U1X`X d QMh.4JF%28cUFfbO`1r7P{:#_~(x.dߢm 3G3VmwX0ntxD) 7ZޔK_ T1&ymTIX lRp%N\l=5ͺRN(u!U\&= `hB3n%3ɁuHK*M2;{H-R8ҲᵬhomG@bBfA& ½%`)XjgsbLU Td'/@3IFC=i2H }~l#:o]tͺ3 PgJ$Y% JMMYq֤ L:R/]EjsɆ o<:r%8b@kX-x%»sF:$dI:$DXҊ  .d^f$Ai*R[ĤܿZ9[[4v jJs7AwSg[Z\iot8%I _F,m??0\$yKzyOֹ,zG>.ΨFe e?B9vuM.AqN?IdBpx=o1ZqzB$x:'W a;\hEXOT}ZΟGCxX?ǻ޵mlpe6믑D?po}T!|:ZOɨo7xV\Pz+sͳy mTi|F4W`Ũlɏ=>}f+%ި5?cM "ЩB_{}Ň?!ן/}xAx/>O? LI0O&/O"?e  8|hC Rrֳg\^r}}e_lm6z|}W6..Z *' 5qBQF+OXYNYHޣLy٢w|6u헛ouhWPJ5o^H8~TRni|%4VF!ReFGYc+ZFu N༬c!:B4^(mhp PXJܐL3P`DP6ђ09{D>L6=Jؒ6VgN쮴:㾟Tx>{vp;L/]dءBG %*B=i{AO3Df *U2ZT7ʺ`+&#hFA#'cD\yB9FIQT@PP# Sj͸NιGFuMBw护C ]HSOd8r^ty!%YAVG1_b D\vÛ>>o.oRPi̭TJ[Qd+>D%t( C>ǭp$<:&_@GDÑ(JXsAgIu0Nl[(żs*Pلqjhƨdbh18A qBD @<Κ 8Vh1rto9*>۶kV;wۖ\? }wڏ>>49D@3I!)t3O~~O80|7fʟ\_mH06J+I* # !D 3M !G_V|籉MlYQݻg_̲ yj2A2VbɱGB%[m 촭bH֢WV oqr[dڐџS?J ݗ}%%-;)!\D(?`rVR,_7ӕam1ؔ :eru3ܺj[otfG۾Mز–Ռ[v>g$yv~F[?x?}x~qRꎎs㡓qNzK5ӟ77s;sZ M;3Xg!mE5S_QLR3˞8ujϙZOsr]W_)|h. 
j;?|n³"H\eS0UZHZ%&rʂ.7tˍR7{1Z(a`yC,xA"4&V8t xPJp!SB09p5QfEy:剶6S_=Mcg}G׻7P1h__fD;Jc _W6ǣ[!(\P~_)?5Q(̙6` Ireԡl$'Ik&&I.Qh`/Yi_ ")凧@,9 [>%jLBT M9ڠBW7&18I%䙋\%c.5ri X.LϒBSMKh_:97j74:`Bb|rPU[=%rc) NL*ﶨ`j.x2ZPD:q!iP eAz  rx'YT2* )!C┡>׍PaB;UR*M(JKb얌J1YX3,ԝ,<,\qkdd!_pOOnM/~۠vПL߸N*)bN5e:xT0n(#nWp Q٠ e3h.CVl`;8bŀW) u;MbD"D<J[bQTSŸTv`qt:bG\xd 2-bfR6DkI*i+:M*E# 1)L{1ĺd p1rvF'͂T$b18P"RRV"$b'7划T(b!GP$-HJd69QmMv>QM Q9YRpjq뱔 kUTxޣ&p Tx|JR#UWP.N!|d,%EVX.N.vrq+PTd+è&1 lPB D'+EţœPy Cv<<+U\xmNp y?jhL05YjG|@)!|1%"x"V +፬rLy% sw'iNҜFd-8V d1$`  ha4dBY#9zHB6L݁">HCB^dGҠ} I$F\XGdoqL սͼ\ :^ q֒Ot~q{ToWXWuFqOotח]T+Ja 6jRX!ג*SةTr֥SXi(W$`F^Rx-*SO>^RtW.mi;'kMlI{[##7u8Åj!fCG"Q7eI๑ec`` g^"2c` ũ4{m7ܿfK X!/(?ػ6$W]/U,u6k*, Iq߯z"J&eIIT<-ۜtuwuS5]O"AkkY07Ҹ,Srʊ. ~d\v4 Ɉ+mMpgͯܡnXcuyϹZ*W xgSij$ΗEMQh|"5qۋ\zCJtIzgNvSYۛNTR_,ji%5kz~Ji'kNsY)k 줽zw۴%2ښgHB ǜ A2S踘D%RDa &:N[ɘ`kA}NJv]+q@ȏ 575T%FiIVpqD:ۛ/m:݋G%Ԣy7gЗ"Ed*mAkkl\V,t0؈IҳPJ5fr mHVd'Rה[sGbO!HRgD+>x$ce.l粥 EFT*9FWj,I 65 V)Rc -((A1r5 $BN-r>8/)Zh~ 0**u2(CXKH%.J&D j4{ۧ56ɱ;9LIrBRRlC[`POHq!ޓ@m.m: MqtrqForBC^a{0|,_D0vSАF1DJeP))'" h'-mtXY}B{[(A'\Y#:NȬuAFS)I";Ңdt{:2WJZW*R4)%"Zޙ8] KPqt`1J>i(+HقjQY>qe^/^~1.<_Mr2 u8QVz"/_wڐ۔UP,Zn-ټ 癠Fi"t6xM$N ײq \Q9J=yb^TM ;I9ҶRsࣰيT[v-t+$*ƛd!I.J]HS5IqX1t ;Ppa ڇ! Zg]}wu_/vԸƾfu5Vzv ǫυWIhxA[U`/x^ )K+w*xK$rk&W1.bIoG)UJئ PRhdTBS^HJڣ*էTm92U.N!5F=*W\KeyYb29P|z`>"gjKZYxՋ[%o:J<)0˃qXi*K,C ʪQ HI M.yْ XzS] ja:׫OC +m-ۊOCIga(]l4jzZ8p C Q~:tF fqm7bU G!rY8Fa)oJdVQK4.yǼӢp!o bxdZo]Z|԰96}\pvK猠*q{Nnku[_WNǼ\#adx֥*Pxfk-5?6/?~|mͱTt_ku#0˱&xl,w@24?ܣiU޲iMpubҮ[t۾֡. 
~Qf81,ۗծ9][[?J |U <.JRWRԚb 0 -A-[ kGCn{IQHI)TzOHЉvbsȊ]f`x`x`x`!9屲K^r`DeTx&I cJ͍H2U"e3\V@Z} m9Ecy6q2~_i^f$ ݒ8G޼2󓦛O*2U~îREP̢6M Ҏw|B&v#*(QUҚt2#,.z7-okGm<P' =Aڸ ɬtA1i=f8Ojzwp6!HL#KE"8)L@TvKrh܎,o>JolOX~\itl'8mx}r2<g7=:Ҩ4ɐ{Bx747Ƈ&%&!#Q"S"Q"Q"^P" 6d3\b0>Z^դɁHD*!% ԳcvŽf PU¿[ᴩ\ O ɉ3ۏZ_]=|& Pv1/lr,<ڣiBûo0b2`zxH~Kx11 0ˀ2P ETBER9e CKDEI1&56gHRdT$@-R`8xWIcfgt4y#@~MΝ܏ms30淃Ű$ǣ'/VC:52Kfl~j~B 2Pz+NlΌr,֊!m8s'7n^/h؊1\J _\Iь?{Wȑ\ 1|f>V ^'[m swD&9]<ͪfAhD&^2"2QO}{l@){{Do((4[{l^i~7w{}f:jeвEZv~sݝWiIy0v{;恐!|OwtQK+]f 91O§-MY˙Y{]l΄+XJ$%3ё8l]0F$%AeX*8Bp 4$ah%7FPeZWo n'-:M6ڐ˴urT.us_y{-cV쌃.x PJ̅Qi;eR> w\i8۳l9=1!`b`:fPK,rTTqG.I5͌͌*qac+X. s\<{Bɸ&==1ߦ}y 7 麟7YeX4Pfb-*-VU{-t5wB7< T\d{8CJx%f6 163aH)MM~<+*f[Xֶ=k/8WR.,ȭZD%S[[$ d4D@ Q>Rip R ,EY$ƶ0bcHFYFgĺKBYHdA PCQKώ:8e.ښ7bHAPB&RI NmVrz,#I%E5O&Фi**M#]E_lcy5̋ŞJ  S7ƬEuQXdq'-x^l8y|Ȏ3PXy:q }9ڗqք |ŋ50ȚލRZ)!BML"XY&C᝶1"r /zq&GHj[iZi[i2N{V(ddH)^0HDP8YtFN1e@7Le0F} <a4|zq$aN+"M4fX,葒ύP1̈O"slq-|OSt7W É>l3O~[u:̘ R*-z!ݕJV1޻ 1˂eb:~THD1<( K},'Z"ÜL '{V)sR7C/n4*j9 k(WFmg3\Ƌ?ǩyh,\t1lIpD$t1~|K2y4_c* 뿆A.y~yM|n~=_F59 ;AS^mk^GbѲaxS X l"'jGmǕct֣.$RqJ=H1B\60 2=2'a7E""[x-XשFdڅ]WꂜX<.CtTmCTBş{ELsRd*ɭ5Y'Y+ H\&3Lx;1g}Yzk߷EҫmUE zR^󣜣ۻhB)*Ix! 
pP֭|ѹ~%KZ] m@ b߹wi jISb'Az-xeJ$ f^ 5;43m P6^-j)6>2b [)٢vi0_xﴋ̧߿N_38 cM r bLU$J,dQʀ' pF\-zb1d> c ʐSʃctT20ıS2dy3f;wG3 P_$pG)-vWcB #zC@!DҌ|&,t]I$:DAw)zixƓ]쩧zzykt4Zf ϵYZX%lL"D@)҆*R" M-zd3Oco7/MA%B.&$ϑc]NS['tΖdIN#@ְ gzk6hQg?P i qِ}|QL!8Ǡd QP)$Gc0aHܧ%CO1Iu->=bQ0vpFq8B FeYLߗf]zLҧg=ZueRʽ,o݅d#_)ֽR38n(;wO Qs}qvnjTUb}ѷgݒpM]؜@341'=C <مx,)Ӭ ϯW@\7!C1ytP= uq~tu^?8MO̹E/w9󨞧 ~6=3&|-31*sEw J]Mwh6st'+6xY߮RxfVdΠMoV=3/38}*_=ӬeEGMX6zͱ+J.(5\e43 {EXN̽iMFRG ӈ!}2!<6k uY2khR(9acODIFebhNeٹm1MǶolm[N6\NA\d[r>ۓH EJ+civ`y z 8 "EOCڝf/Jdp@Vz }V>kp-w7s "+|g𠇓{iVM /凅=gzc䆊fPyXy7 _2Z|;߃̞&|o`~4qF 7)t8~qm4YB3dPzc2Ct+ϾNµ+@kj;]!JzB]!\ JBZ=]R{NzOyhZ#M!R.Jf[*7%v1aL ?Ã׿`2͛FMohABoϘokk+hH ap,I'`IR$z>va0-+F83tp_u|O+DT Q,OAD [ wm+TNzBNVQ]+AE Q2ҕDX!JU+thmmWR]0e3tp ]!ZiNWRk+͘P]Qm+DUҕϻ "`S;µQ-k Q8tu=te^]`EcBLtm+DyOWHWk|)4ްԍ]74a%_Q."5޻x:y`] tpfhYx׍i"oPAH[u{ʋ !_l=_gSG!R*`H(fa>~ ~UI9e( CӇ2hꊈ([H_w+XTs_gUa]s9{攙~f;ur96jIEfXX-EaN&˄'M2Y d'I+b:vtBt6go,sF:Z$*`Nfsƀ.mզ.YPPLbe4WFtڜI.nY8寙+e- 2Pږ, ,;i|+l.0Vtet( qC]!`C:CW9B U Q򞮮 FX3tp9e]+DkU P+!$5]%tp ]ZNZOWFwIV•]!ZzeQZҕDS!7EWWT6PZۮ䚲6ݱ]\C:cB 2xtG]Rډte$Ug KWRtuteKvltg JS;Ue2itu]=ޠ3^l3?(Rpp?{WF]0Ю[_J `a KPHw`{N2͙qKklKjn{NO7 zɫw𿾀cU[=b*z}}OwgۜߥM?GpRy ܫڮ ~}^j엿<w޸o/on/5/C˴P#SX4{[s9:DK8{c\h@ñ[쁒ߎŎO\J)2{#KҊJܺW)Gz29q<{ i0—־ELJܳGAB]z1~At5p^ ]U|t5P ] ] +0/9ah;v( ] ]Y"jq1t5b@ñP:RBW'HW#EW,Zsj $tut)z +ap-ptAIkmot5圻pb z:SuutW.3Da)t5ڣWWBW'HWQht}^]E#sGNWQNW ~#tJ=eqvo݃e4Νz#o屇^C}pKކ`%nt0pLciGKRˡ+%QZ?_z" n~]U:9../LjDhpOn鮼?w͛;|?\w(^ j{П7G|9'B_O:S`?vf7?ty~N}D~Ի-w.Ww@&zo~ΫۼWNĻ's0:e;7jK^KuvO ;jbl;i6ͻ_'/tokCe>,h@EDi؇Y8 U ~seymvGg,/;; t#q|? HmUۇr`_`omΟؒ.F>UhQQ6w݇DP2? ;o^ow4~f W/޴Esut7ow~&{Tk-mY嘜ѕ6()PrQ+K[g)[ ͨ3R*OUqj8\uFw+ΪTX; SMN ґd?4m7juUQ*buXQ,l1%ęm`bB$Z;?9rk5T*@U2,F$ki Fdzs&E=g'bjC ߽{I5UK;Wjsh&Ŷ 3J55)cĞDi{ 63KAPИ17&{PtlE{))ϕW@xGFV'LQ&fJ{ІlxGT; 2G !Ʈr&;'c(,VQ!(_6uqQEL~n/Si? R!]H{ Q,sr>lc_|\,}|:oNB j[k*uvsDAuAr=RNZQ c40'HcvSmg w|CyӦGpFex2P6ܪC:}S7 `ʳ-cM(m쭮L%OFHY`(.Lnhc+[+3\mM  ؠ l7{Zݰ֞PѺJ(q5 eQ6K;;xNJw(PJ#l2c}Mhhpu 8R, |A\Z}S 񿒡2 THtB3Kf\̼޴X q=3U ZYq2M!P ! 
@HhPET&T "ݒ{k]ѪPFt[s ƒ= n0L AC\-f8 `:ut&;yYi{ l*3;(@hq`jܸ-¬3k# 2}_m (ԩ0۸+ъU_ JiVgt0[z;Y7 &φLJD2zN ]hQ"1Y CcFXZ}FѤi!X_L. S!\w$O`3)-2աlNTmQiMEZS{I0вzbh4vMfcfr~0wӫ"f'+q E!9)n?TFm?T&ͬ]MA;h Lb.1K;PJ{HXgZdl+TOqXW4'[k' 4$\Fb?IU^ b^8ؔ'Z[T1"@uQÌb;{l 7؃ U *m6*R-1Lǀ4욆66)عYi6>tXv=H՞K g&)Y 1K1ej)Ƈ=4=B]fm>΁ lWl08vpڠ@4XAq V:.m4EO0zʤBjiqf0Mz|dNYƬ i8 Nr\kWՠ[ 1xzAp*Nel)nN8 bC1+ੂH28Ĥi2s7L\- Х5\`].?tT읭3!ue`j?/B?_k`R.`mj0 tWUCLCK>œӟ~L@~5NГ*V'ST ~7goƵx?^m%u)oA/.r\߮=է/V]]}|[,xzg+=4RCx@v}h7~G6nnB%ITهty1nzuwK@I32?5[*`b\ Jn1IR@Ƴ_($N2 d?NW$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I Iv@l$< pZw0($ tI ygIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$N8 L,) Ĭ^Np^Lh\> NI@Ь$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@:$Fb^P7.͕\Oq)IcO l$ tIgZHBSc!`$X1nwܞE JOpDR58V/4lVwWx]h*@%DH T"J$P*@%DH T"J$P*@%DH T"J$P*@%DH T"J$P*@%DH T"J$P*@%DH ~"kh`1\:?|)zw?A I٧% .1%p {Ӗto9hO8Q\kW@- ;+xs \!F+9vB*)pJr \-+zSUG]!k;VW[ILJ BL+ 3j:zJ JIWZ] 4^'e/M ?!\ab=+tm7prBj "Jzpe1OX`WH Rk^Y%\W z3p9i}Pq ^OWo ތ\C_10*4d.pCw>ԳaOE U T墚MG9Ҹ`M(-"xTy2 :y :8Ok,’ MW+?+%F$MZNJEY.Ul2'IĪDr [@ %d2jB8+Ma-6CHy ;j xPº/L#&DI~Fl4*)Ȋ h"L3i>Xrʤ`O>n)x7o| $r-4O֊9y #;O8@ХZ'B \!7pElGWH%c!\)\x_ J,p LAMX__&A806'&jmTkd q~1,W`,_lW8˛DX}YVcYҨg `zWH}+FEرRIywWs+Vjhr>W as8)ݴF 'XǓ[7= ֧۔?*HW2R g0' U!Ab T"WOg X7B;:鬕 =9;ghy|On{EYYʕs\`.l1S#CAN"HDg,@e{hFHeVא~gt4tm޵66h/T_f _*w0.RaIU `*HoBL&z*AQO`ٯٖWo+:tx /W߰K yw1Okő߷uz}6/WGIyE-iČKF[b0uKҪztXMNOCTX:By6 gfE e`#`b)1HpjIgSNz]67&%֫ճ]딌LNp0GPhXq#Ձx)ni:W=ͣO>b{%oL7/7!'3H ¦ZZ7z")|7ӋIRfΉOoϧI.ʀRHMeU1)(CEu:EqloOyNlO-ϰ,3W=H=4Mۣpfs5?쀬WYc+7LpqTIJpAECĿFNk%h|f4^]8lܛV Hq0EZ|`׊dynqwID\|˶.8l>{0XxkZܵ]W *{}jZs d8'ha*`c7Vm:kVmIͽ}GK"ݴ'|5n9gԔ`ڬsƍ(<&xHJg#ؙ90c:$X 6$ %şbr,А!ьKn&&t,󸠦ix}ggsI@qY%{s8v$9\wcO{A?#GOԬf.xPJ̅Qi;eR= w\i۳9=ʘ9e= sJ3L`}G;Jp@kd쌜Ǒ;]3,3B1/wܚg7n&irOO[[͓?oPhe4g9bgbr`"e&0L2RmO kE)bsói@cEF3MW"jf`c362 t؝8b8ǂڝqDZ-]4]/8PWR.yVG-gTRN$Pf2D@ STRTdBK ͝"`}3rG/ .֋c:KE1.҂wj  S`#*e:k1G jM^flj`3PpEphĮP<!; 
_:sЋk*w\~|#GZlDһqQ+%DT:3›J++dӶ2FD2#9:GH{D8"4Qi/@l%J@d2G"\Ҥ6p:.X":j6ubEo1o6y|9]_yI6Zi1/GJ҂>7B,2#>̉B;OTUQIsX+ `ni͙ALI@F<՞Jb,;A]` qg<.KktSd}ŒұS6nP5QuE]Q`k4~vK]v޹X8tcbeЛkSSbu.(6`4A/i$ L͝a-|m~_'_[.!M=vu@9T-/- f˨¶d5lP[ԩ hST*eZͮU*z2J uuGmPe"!mu/UezdNnDED [P$73):SmejWzKJhiM \$چ(Щm1LL<a U[k4ϱYԳrwqm?߾'‘?NỾnjl!WwNۺ_T7R (<,Sɕ/ ZM=H I9e<=FHG%!圂!;XsW]}>>s,.m |أrr2-",~A_x>R 1Oτ6R1#D(<.8eYng2]=[ :$i'5gh^ ?.֥\Zo~*э; ~)IVT}mwӛ} Gu6dx}]BahM -, if M--wMS6nvBlޔǰ]&6fQY utOς7ZEw/b fnbM͠?YfLb"̺ ̼e[\gO:_ףǙeg7Yafw=]4FwbeSb^ Yu^;0"D^+WbzfT49.r`82ӱW522j'ztIIhC8G㾾h^IZx$HZcDr!tT$ܗ\Jp94HXul9[ XJq6v-b[.bwg?n^@Czڑ&4re<-`' $WAu娅O8޺H`G@-hI0#1%YT$铵4L4 z şXSVtϚ*g Y\;aCYW̒^q"3m֊yMP=&yl垄%l\T FPhg yNķ&ΖӔє( D#:h/j1ܫ(D:=X&uBEY&Jwd8XЌD#]"# ly >bS`dcY'n}pA:.)w֫zaTAtt% 2\ ǀP k3D@E&{Qd;;&a,E;}eTFh mgP<;PY: Ӧ -Ɋ&;SYPR*`v`ȼ܀ɷW7dIo 3)Hc.%#`r b|2,;F{nzLP֖QhɹgQ;'HGuţ҉8Z [Uxt{@u2z8,8$0fLyCR"!hS`ZVyk[W bicj@ϸ2״JF&6/^NY,GcН`9WRSI8$<,[0DJP\E&xU_+8`Uk dīī%W⠻ 1w ǜ=8?]M{P ;IGr ߜ .JW;S9rU2Q}NQ(e5]˷ZsvBZ|k|<ח-S})#S!c+Dh\2 LQE26% e4V9<]ea9RzDHX^o4.H߽7Mtn90~4 ius_]Err-l[ZsЮD|~vH@H&#IYa.&r/ZXu@i$`%$A,XD S R`Qӓ_U gBV& YøUdA=UԚ )<&6[&lQwoae2Ӛ8BDTK6>yZJgC>> 3c>+y6= BpTOygV= /꟯^LC-^~&P食j8}Z] iߌi?MZv ynz%%3LduivZsmE|5yr^u#C--X٭ީIM6Űukwa=ތzڦM9'忼9'j~w]_LnFt}I^ޔyoz^mG`>=joF QGÏ/z3/owbe[j G8?nO$j̳F["C-T.)WDi\<co>Q>45l`Ui©(<9#hY(00y۱]e/\dy #*dɻW:Xr7Na-[Mz_ MOr/_jJ>:JlѦ mfOP3gұ `ALg^ƼfxAGլ. 
{Ԃ_Ö20ш**ePAQ˰2H"s,oGY +˘B ^.(tZ!Y)&2-8hIY*شUй7hzOjC>߼FM_WKƢTLhQ u:PJbRŤKcSVꉴ&-[W/8fqߑZAݝu Kd0Z~ymlBReRxR8 8⿿,qWjK&Kx?~&N<4zاM}%Sp c8e=%5uNέxc ' mW'xٵ4]>2#s&h\/n ]ni@<UIs~1:槞 S:k_o$Q}5׬z'z.gQEc⧓I.r&O\ţ:+.W{+47蟵長n&^/58B+G/ӱnl$?L|~KR5d{Rz}OWM݈nofy#EL8{`rٿO6'7WFP5&9m?^-ǿwgo^M۳޼;^woS/4%Qd~y)cYGtmw-Amѵbm>lW ~>{_mlHnn xwk[Ԝay _!Iͯq/ΚT;U hzGz׍6*x$WI\C(LFЎexZ>hLHd6d >VGՎKE~gbs߮o󛯞;u,>Jxݢ&ZQB*#|V)ȴG&2LEQD1 [wK)ztwOgOS;N/[ous5odh?* [Xi}>m[Q\9d(+:s| 4ZEgta o102x V;2A8L9 Dʙhۭ,7Bu}*Я,=A]s.Q?#Mrzh㸢Chӧ| δ`(ro +~< ?NP!oSg}rYpWh wo߼̫tF߹}'|nl[QQr~LWRPT,%P$Gcr2~3d$YS2ŧ)oRKjֻFyurBAorQ0h6*:r]˽rhy>vh‡<Ȧ]Ls咃'_%Us{|&j{W%|Pi)Xc8|RUXZn\PTƢ￵P 1% 5W됉+A 0a}_s""ʻVCEwp%BK Fkl"j]31d2$;4VptJ.%%33pRCmJ } {KHee,IzXj5qv ʡ|˖+?q j$ [PŰQLpYR)& Ix0͸^kbB\ msyxK#sWxv[ӓxv+m^ͷro rh A'V|q=3tRǟc|?,;@*Vi4*-s RA< w\iX۳l9=1"`b$:f"č&g"GEwJ2Tؚ8#c{JkXؚf싅e,4= $k[^g@X,i457PxiA5qv`?O=⢨K{iɾZEb[=EFA)ڈE cVg:-8z\<.vOlK;C2vc}\ϋe({w53?Vi%Ip=62H@2d?Z[ُ ٝu)$I/|$"E,M:j m!`eZ,R\b+LU ቚHT gb[[gDȹwͧR֣ݟ|atɪKs[\/Vw~XH2! iEB )I .Ȍ$2' =0C$B\1SG?-+'@+/hFQ'(vfH-JnۯU{ )ʞ'3#Dk]'\^lSCɿk?vAwѤJó̞6Q%_A֎Srh)+fwaO~0`n2n05f-`tE5zp;'{&}#u\]j7~s 1Oꀍrֿ:x\r1L/fRZis\ 80G5}ZFJcBZ_!iЎN܁vTw@;q{r8F`(2 aQYVs=Y DJ#}JF)m6>Bܾb~o;tݍpjN5{q4OF0`Q#A'̍9dlSf0LI2YXkՂ'{URu%.IVg>QM`&|Z3&*61@kowY_c">r9STds\GYbi'}rkM*G¢Ǒ/4T#rU I0ⴹ z5*ǬF9uO=h?nNSb>\m5U"a1=°d q L["ֆ>n>D\.J]r$^x䷗,@gո .xJ$0AD!4I!%- *&eT27="Pk5g̱0<!E0Nj$$唵! 
~8s ^|`1/ N,8{;'I"H?.;Mpg+Fܟ,&'8r'!zf4Hd|Fn u\*f$,끧Η籇FKC2xM5{af!A6TٔrIejΤyZޟmt@p<3r՛H>G N-ɒ(UG ְ ֬~wowuעfw?pO9EmGv#݅l_jрAuE:P%LuS|bэ u R4e;޿ ]?D,tJ5LIƌ|Ujv)nSq;fmW{v3xh:7<]M֣̅p]U7}9f璦J*8r:eZ}i =HHZ[OY~M3aۣkېnz\mFۙ=w f"~5{egz9obn>^`qE R TYgU[.l:W-thx__?ՒNrjYkNtH J <~W) KԋA)4j1Άՠ_4/0^~^ٻSi1ŸtRXΌ3BlZ} mW3PnP%!3H[n9p1:?j;᫏Nm wJޮuu_t/^ʿ͆tB}fpJ+Af%6+HM1 Ɩx j[\Pj>ifoJsnjvd=MwxazE"~:nbߪ<;42|6w0zF]y:*F|~̗RGٞ_z:;l~^iet"껳f7h:^wXo !&B6d,J61>ْh?-Xeנw N–u R&pc ٮ$dƜ(!f 2* 'ɕ~+9 dϙ]$ v)B1hmMY]HdI$R9 Ziӽ(Z镭.-qlxq|ڛ:M.N/e?+!#Hr}Nj[&&&e>"%Ј̢23G} TJRX4 236腐9 f3* FT 4U[h;x{fGx}vϚT9,8*Ѻd45ܗD0uFZNJ\C`0rۀ l7^ӍN&'{y#!-CJ!rؼ 1L VrCd`5C,jD`@ĺIJN6kO% Fb0tBmmHYCY!Q(BA2B>#mTZ|N\qDͮDA"ƢBpW9.}Wꍜj}*CKw#{_4P,Z.-~#A'CV);f&fNu7|BC щ(wTJ>mvжՌ(m+Z䂨rRAT^B> H%+mJ_(H+JQM 2ːҞ3"&y <@xԒX]zs=^'CgǧK$tyKn_>}}M\M ot*sU{:vÀW !Vrj7|xOY Y-؜oPZȉ Łr_z-{p=,F*+AQt4HN|*B9# 25M*-{ŝQJ`]ӓO;ҎIFF Qq!`3썜89WivOK+,YR.~`|)kBKF bmYN,B< ?UF1;/s\#_KŘTD!*J I$J<%RnBKc&e&I,>JrP^!jx_HI@_Tn-W!˘H,2T{y-cȄ(gJu%g hE&7>؞'Xenq ڪA;q܁(Ƚڣo =8s1 doliɤaO>1|P1|1|`1|B!rf(2% у-A9[[Gz(1t쥓&YEPsDM=6]vA\)狶[_\|swpy1eI=L>h!mHDU Tev޾esd4yKSW >b}H Ւ$Ԓ*|*d+_KRiPK+% wXbHy@c0@R1謏qyJCa`T?f$߸ Cɤۮ,͞YE=7{fi%dcx`}t,ըH5!\38TkfMC%fvkeo|lNg'>6S1[ՄB6&S&!7)ttb -i*r#0QZՙ y$m KW*I Iٱ(TKw 1r` %S}Q ,+RUTd0k@bo2fNnlI2Nb[E _\h{j("$)g#SPKbVI 4 eN: ؋龴'݋E]Пp}r G2pll&?W'ȲkOVRFzMֳB#E@%jo3Srɳ̚de >UaE=G<հ<\ _jLu<-Q,OɘFk Hdc:e~Bjթ88rVkたq|<<>j[J }"`\"tT-#O8.+Y%_Jjvz’F~4[wԷd1KDxb7oO|\"=H%&$ZC(PP9o}JBcp6ǻ5+gs%( j*rPGEwpt(ig]h˶_hwcՀ|5n?eXν j}Z¹>mn8>|Ḧ́dd""eS :xe(FDtL*' `{pPxy;fW bx9ŐA@Tclq~ŵe)l'JOi9UȮBQ1<7*1Bp ;ύJ@d' mwpnBX}ح;o%4#$nkA5)L]7PO 3TZaL\02>|{A%'1J $HbBl'NO;g2^4^|mGhʤ`ȖRJrЗSЅj6 ,TjN#gs Q&QH_*d 20."S#j1iMIn/ڜlE`1|ʙ#=X>.Mn{b śBֹZZܸq\v+"~H:>7uB"p9l`]v>N>M9^)uv]<2 ǣ7̾|3Ŧ Sj5#@! ~0MSaSxM3fqr<\NaL;T$W}Yo}u=ay s \J0ԵurD|uCD%oy @o찹Ѽ-u!xc/#{b5\s~i L-..K? fw5wZ(VJHSsuyx;?n\]ϑYEmzm bn4j;hLMN% NuT3(mHGhlhK߭E'"9X.O[~۫A?(|tvGw]K? 
'_Wg U3WiNC{ܢIh` [Yðbió&x~",p1a<{ d > Oc}%1 CҊ3E <ꅦ[χ}&K m RE]~ZJfz-mW3pnX!3(P Esbh4\`3 kYpm`׼~[^=3 sW ՒJtn#.?%jF)>$)PA(ss&.Ǣ$;*&X勅~8yd=|({ 7sczs؇vezD%ЯTG/gyF^\n<<{{^lf7g&aܔPan~zv6=11{N6 GrI"Ux%-G= =OLFca»قM쬞dG|SWdr٢ ){ (L*v)]ܻ[9u,PӖ-ZOT ( B L3 )2k%$8 *mh**!_WjoվTj}z* ;:lx8X{cP*7v)L}\ovg}cFZ՗Qk}-Ck5]_v ܅_G++aH2Hs+ i-'ӷyYF^ҴibE @;/*O{u ڨ vTm ; 5=Yu t1>тc3f4FQ`-*:oyJq@7t)7 fuYK`{ڐ~If䶝gl[ uHS#}YB'J[JO"Xßݛ`}.} ^Q6")'i[!wÉm\(tכ*零}M8(m۷j G8z8O?઩W_[0L-T9W++P@ݛgg8m/UZڷ\ xVQ@ &+!Pʈfa1':Ԯ @2XpPgG養>0|֨ 1r˖kkRݬ_Ayە<6/sTĢ;l_XK?ǩ '0ˁz c8f1y}!ZiDxu+ZNja cʲP VYHa߲ƞ)eGA@tAK kl;s,f-h,CP S5VE,J P̶[?I)NJvC,G"2NϬJD…y˸5rƝat7+ۺF u7<9a' Y˫u-jĺ{ri=A!"%Piեr9hj)2ڔdNr1t͒3B'-}ϴPpGD "HSuw%tLB!GV* Wk#HV&y0%fT!e)"7ΣzRErU3F5<uIXTR*K|BX# n]AJ+Z@2%P+{w Y~k#ׄp3w^h<0}t%7 YNgԕ**2VNQiKElu x-߄0d2ߣ=WKl>O'"Fkeђd]6ڴKP}֖6Z "BD1GYuɃq"FNV VDyky@MR22 8x92LA}ei HW]ߺx2)GTRyU !p\u, 3,rEi=(8CX:cPbE<X+"z&uR$FtJG((䌓 &,RW+2:gJSkdK6<_f+,q 9,%_`JY2Ko6-n)sڊP|Z*iU2EmrJ1RK/V +ȣ7 !\-5'VX3J`E̙߮rBG&瀔V@6ǞRr%LumnSM+B{ttt$3d1lj[j+6(ʆqUF++VGW5*)Vu<5E+HZ8!?wk@CLFPLd)ȑ+.$$"%Cb)*XRxZKA #U)' :sZ8)8Yu.o4No,F΁ [: ahnj$ثvpcPb9`, VGI:2A6RB*$1+y XI[Va"^[nuj'.WЀ2v躐%,z5 ^K_Vp⿅ ~;(y'lW5?BAkQeR%vWy:[xBϙʙS!&:2"0&PAS72)9D"*1dhͷ3֦}Fd?{ƑB..# {Y'9x \bxH-vW=Ç(Edag]SuUwC,'4<,Ǭ:!)~rʐY2)[(ЏQzɌJpd\pA;?9_i(iS? n/5K ćΤ_RJ;O f"!IyT|(7c]yUF_.5̮K~O.sk o7*^Z1"msVqot1ӂ+>!kBs9M!8<]R(Yzݝ@T?7Wϧ霭isa.o~uΛn17HH?N9o}>=VɬI=]FTd h(|]z{ox8^!YAp罎J'ltI} ciz2"ʝquygd±Q!ԆF ?ƺr'vIޟ~曷ߖ}oߟraO_Ӭ'1F׊-Ϗ@>f to~|@V߬ki]+SO|~e߻sꀶskm@R~8~;jVf&m^̚a:Xvm}?u+$6e'uⴉZnK9[H/S !|u) ߖ}3,\1FZ&'''qmCb2DXXt"}PXO3FHk '8IϽ0i,s-ymғj4(pIٝDec , a_}5lM'^~λ;_n!djzdx2~r6껲Vy,3RTA _Ot:p >/1DwNGF~0zeX_Dbx@ ֪AS*;7,$X9E$ F{A1JV+csD8YJkC-<&tuwo|]UQMQB&#/ A)RhΚDF0E$.nqxx$|m/m4jy=[0x.%/v'm{o>S|LKH۫3n,WW= !1J$Dy7J U֪$C4:Mw5?}+iĞ c'Y"0c=Efe"PxI p*(Ad47>w!+qJ*c<^]tKnɌv=m'y{~󅁉eRÌ6a1q). 
$Q+ʂ؈νh1ߌdsԵ̞'4SC+NqrҜxUr]?S기}I.4Tgó"S]^TJGU}6&)CdDp`~Dz~0{GY#c+d(DR% K0Z:Zbf $ zD3I@rp2Cp\)rɽL29qw%('[- 5yÔ4臐l5 d9LY$/|uNߺzK[w-cIeW]t ͨf'aMPfx=wB?)SNop\3ܔO'WgCK'UYYJ)HJ:]{A/G)磤Xd/̄?tÒ<^Er PNF̥BT'nAwK} G#'qC*0{kN.54H%ťݺfVSѝIQ (CYqv{=.1y6u^^bgʑT4r&n j3e>MoAa)ҰTOQu]m&O߭H}}XP~ej0aw fu{/ _.V%ל5-aktzmk!76Օ/aK̭ߥc>|./wWݛit~Gnp94wZ^Hᥖornԇ`u2gF +Yz{n&۹O2}7Us[-=W9D/kqpuJYiLk퐓[OXE eo\ UWY(2+qOg+q0bOuƞُ8nsVh)崃yM(|/ !%"@;hD#9"Yx-̊{> &CR$F SlǬB9$3`[FĹànڭiǾ;ݤ:'wM*' JW5ϣbE'.V0k>ɾYNF@XѣѐC:HA5o{O5q:U`DlM?vDDEi{D< ⺓4 N (p'h|Ɋ+fZEDg9c+RlZI.D.8RVAA"i@z*>15q` .ؖv슇e<@X/b*۔w~LُEz iUR A1Xro|5`yG'Kv\?d?,'08?}Ui}7YsRߨ1E-a$+tFjw`Iy>3^5G_M;@7-5$ҵU:u9pL|5ks/C3U 3sk>vb:M㯦W~vEͮv'ht޼$O͙5LaֹuH0CPD-Ǹt)SWGW "9W/^ \iw*Rj{? \iIyApU6UK"W$%08o5.SGE-b-Q)W;8Ab:NuũfQJy+*guRT&DtO1u=7܉r$r2y@ԳGyJ4K섒)G\W%KdO\$w^Wί@?V;fZkbͬw+'W:6 _=jeMP=Pg#3EQawP5ǣOM!D4 9ƆUAP7tesY=aY&I7/?||z??}u@#0I&"gg 3n~­֭~[K[Xͧ 65y}]~X̭IOq(i]r;k+U0'$64?%ZP*٭T". TLcC?;GOIbwΓ2>h .i,XNdo0$#Mtx 21I^pߏiAόȃw<6njJ!@k&Rv+`eCN/fZLDݾN} ީ?xgݝW:up i2 RùPu^"n}F&2ׂA(RmuBꎠj ;؞Rw>:0  HVmɛȢ1Hs{k#CP{YR*u)/uT,pϭ1&dI&jd2^A&ftH硏 5pܔ^Lm^ Wz 8b4 i&93NsWRH{e勔=i@ԯ۶i:uk:\U`mf%nBC`5*(oJoudžĆcOHVC:$ z  4[R,/RyD )$mJǶ1  qY39wΖn ] WB|2(59@jp4GS"7zeC/102zhOηΘW:cdc_KC܏-d-&F}tl;n*'2^-%Oٕ#ƑE"4F+@䦱&Ƙ4 @a]-,)?*+q/?ٰDGbػ](Hc➱P\0N د-HHK*`gz~ n2E?1eGb-PPS^7󗳛/O9oͅ׎f{y♶nuʷ'?x+-VF5כ]|<9Ro.<{'SI^n9<6^kvm ƀRBujzL *e3D|hP7ot2vJܞVtwW\>8t0\)n(ƞgKQ@}U.i]YgcJyFWO/_fB<~iQs4lw=9GYC"[ur3CQ n;UVɒ0K K֭f&<3&diWՂ'{ru% $3f&I>D!TґQ\e &f ^ހW[{1{Q|!fD ڥi˸U X/8#Bz20V }G(D(qJ.Pr8B,{HTT56f}vġ\b\ff<BfXv>sy]hC;opGMܽ`lbzsꖦ 1}]/]d-̤dr}`)IF9S7#)C+JEiFg˧<$d$(^EL^K͆ήZ#XF4C-jzS-ϯe0%Rd<=+o^Q(3dZ8hV!y^:U@ Nj=Q1fQ&\O &1a"M@"G͵t+pTmXM͞V iƮV{'(u7=eh8vM AkPVL6*kXBhTtn:ݕ{GWAVl~Y̿1R&lsk5 < 7A:)< g6tAxk'ccI} ~}Θr2?o=y~Zh_;ꈾɿY!IIE@(cEeM:&UԌ i:PtyƝc 8,KH~ V׆2jOǀr\ 9s ^|1?܅;w`ߜX^1!pE+Y܊; UWKTUnxB^k<\l%ڵs?ľyʌ)ˇ$Eɢh&\L pfLwN½ oƥ`1b!m^ c(, >sy.&q5k)o;.5jAPl? 
)*&5MfH٠ 7{|K-M~hfhp5~7y}=CΑ#Yڼx6^̣o.2tK8[sbXunt E\^VQI^n,>SI޿ʄ+adYgOd"ksX3c9_%j8zٳٸ쁻J1Ak|ͩ@̽oK^\0rdM(VS}- K$q~"KȾf}]!8RAEwEp=V\^ Uz7d :'8fJ*A~29&Vo>g)7~|H?^QAub>}`#|1Sf% YʾD9oylz Oa] :yMMhj7swq{ƽˋ3&Ii.+/A_.6 |6jl&oYcl,y87MVW.ͤWGa:90{:fC=j '5kBFUJª_x`H苂k:h~@1 JLTU?-Zvu欸:'hW 1~0VTD$.YcNܥ&4Ej-qMu0u'm<' K^\RM/j08is\Rj%&,)QldJ1#WByXؘ6NxOt6jʦ$ƮyׁQ/lnzZj~ 5/|mոQXhoD|v֑4-]UQ "zq4M6 )ބ m *VP@"JQPͭ$G~ZeSQVMA7K-ZײWJ8 ?3 wzE@EH %@)d2[ψr M(rneZGEdQ0A[R.k%#RHDcuzj6 5Pَ.ݵMeZ4v6EA h݋`IHL%)]'8-^\.'Ƙ4ktN=a;}0ljr@ێaZUZUMm;/-J hEeh9R%O^o]O)\{Xf_Rq._~h99,śPa D&d) Rg*|Rz~x\D8V @LIp:Gn~ ](uRvաOwPYu/exbgQy3|{jYY2v@rHcPG=8+9k&`[DW ?tUmVUB)eGW ] lQFu.J\n3sQKfx˞h&YŀsqDd^>>զ9̿sLrV{K, R|{vs8:MoXc}d.92Mov(9oM-hu4cΈ@-ik*%-t2tJ(p%n]`Dk*-tJtJ( )* sLZCW .!mҰ?]%3O[DWK mUBqGW'HW\hjO%k ]\Zjqvôj6eٔ0Cro@%Ҫsso9;kHE>ג܁exЄ 3t ,\Q*EvΙ&\w% 3k gsvem9<y F![n5aNcʠn2pm)UomR*|Hg&/,u|}VIzvp%k ] ki:]JL:cJb"n]IL)WLhܟ%ø+mҮEӮN4\SWUB{ dvJl8"*A =zs&kch6a[hmC+doR7lKwzLEtm{rtJ(P,j]`uJh9m:]%vtutES"Jn ]\d[*ittUQ!t KNZCW mVɦTututũ֌8CUKX[*tP~t%Mt%'.]Z)H Pwut%_O1+aYʎknR~Gpi1>5*U fБs0>wim.h9F YiW߼-&{Sm1m^PmGeg* Ւik Cm g/m18 Q&ZX@93)So^?T{مy36鏒%ʫ?eрhE~dЂ4 Nzvz:r'zgAI~<:ϞWSzV|Q r{Kl)sH"ŏ^eo:]մ.4{ ̏ix%ۖwM߆lПL"X\§0gW7Ao+u?"!Ne|Jҿ2ǛWo+c&O+ pEԷ+jmRC7 j{I~ѕ7!IP˧;6 a /CO* C3A]3Α-IKMI4m%Rj]&1K"䪪]^u^?͚%Ok<{g[9n?Mww=/jg'Hܬ}9] [~Y_ö<f(&EqFnEٛo7q;'sks;;DZݘq,fڭ{eCU6P3z, q @k"؏s:aNX/}*R95`٨*E*][RGk];Dˎ1||"p||L{:3uǮjeg<}ߧg}ON۞~?iFXqVC!0glZ#J{:cWCL~ \`Ja.߃@atQ*jGu~v>|+Pv{ZY.>Emuv/iAڂu浿ˆ=-?30CFZupLJ"1?]%Aџ 'քNgH v;!k> ٗMuZM#zMAV'DhwI Xcc$ XkZ*66Z/NNsߜݏN_}1c7Jy"ojЊܙF#XޏiEK v}/+~j OҮ]=ܡ'O@` ne4DףJt\AQꄫ#ĕe]mbȍv\Am Z:D>cĕlWl~jWĥJTW̖+ˤ8Ga鸂Jpʱ n$wфap%rYqkW:`#^-0:+Qq%*=puXD0ap%r0ڠy1*DKt:W8kW"7Qp%jڡiq#ov޾q%r0kW.DQ W ŽC(vK:߭qgFa(rgnm(f`[=1Cy/-yVWW"׺Qp%jZ:D%pu2 @;/Wkq\/ǕJ'\,r~ \A0qAky\ZJTpʑ#ok3FzO:B\y Fv% + ǕckqJW0Š5Ǖ#U֩Ad5f\A-Y:DWQ30D78b.\AnP|3(jMX:D;]%[3-fZ඾Z3a;̯[7=a~5чpE>4ZwUګ1Qg*ZeyOSqvda6h;b)%]R/[ZpdEx=mp [襅C(`/|Sk$K ܡov ۃ>J"ks˽{#GmCgYɿ(?EI7)O(7/ JǷ_иtBldoMb@9Mm-zz 9.j~Lހ5gWH_;7/Yux(<[X.y|Rx|-p 
oL.,bj!SR+{yhB0h~{eMhVr%F=:v+cA4P:iʹc@K>\ʙT{K,{ٻ0Z%]\>o R!]AK{Āü JƜ9X|3A'rjݹ9)dVbr^TRs!DAu)'v/Tc;M1unE"NԌMEWRHHHI?7 ҋJSj#JK99*  PkUeKf|mZ>W QU;)SͰjpoB+"8 78c)(duDaHs50ȂȄv-n eJukΚJ/nz1#.UUQV̺HNO*D"Ӊ1Kcœ݄m:veU|'WZp `c3.s x"G3Oe~nDw;(!_^6R$';V;FwX7ם6hfѓ2EFظ*&{lwO)xJvHu5tk*ZkM!%8OãkBm3Fn)ozfz4\iV{ұR5n(a/[øb樇 |yB1C[ŹCfI,WRs Sڢb.1#K;@O'SC="֡7I1ƾ8P|?wߡyE#xQvk޵qdٿBmRE b%`PI I 划P8~T:u:fg `|;I&Xة %1<*!`'U[CzJpst?x(mX)+hGϊb М"@KkL0r#S>ctU g3byPĤO"t4.еwHisPεp^`ԀtBe_wpW8@"m@û2֟ťp-+!С 0_= kFД@0F`3W{9=?i) tQe(1ZR [b~y% .DksڜφbUVMy K1&# $Ȱ Ē`$Ri`OW$Tv#Doa-ٝ`Av BZ8.Kax 2_t#*1 oBphTҩ`NU>T 2"|ë_`C7X.LSVSqEӏe.__Q.hq{ih,|Y1Pbq=^^ra/C .+,~l%-vqlT߶_5|ۛ $e1#cSbЬao $jCˏj3W ۡC9U]8`jy'l}@25LN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'П d@T>} p>ox'/.HY9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@R''t hHK+茜@|DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"':p0I pE hPrz'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@~i=g]ZoůW'k,/Nr Mo&M #Ƹp%}1.!Zn\BKga\+, ]!\BWV,e Jtu>tN0v+\zCWVlMDWgDW^X7]0qSL L#7|7j78~L'Q^ms8n.iOŎҼkR}^S3` =o|7IW%}ʢHGUƷ^*򼕁8˺"1*cC·IV4Qml >ڮ:hF\gh|p6[nF0k$6cX8ֲʁLm! *ᆷaq){AHF jwnzwG(v6E* !5ҡY:(\S3mk f8!Xa OcvB[eP?O*\Rrz^gϲ:X9 B#;#%̠#7K.LUpO)xPI\|y) 6 5Rp4lcf ƪu1qWĹ-QD',f\ X,9lBEɡ֙~tu &huiUw yŻ,u5s&D](&)T2h)44\f^b-#|#dJQz8gUmmi~A^i'߾}xyDfY L`Nx7LSJV)cL)j:8h*y~9jd1Z^lDzY&,e]p Z,@\ [%/`tz9ۥ շpUíjo@s.3pl,x5lj:Y Ӽ<'pA*q;|/OJyҁUavkljҀES{SYdU,dm<gMJ֝4AX" js*GKT6br9h^dTZ^\>;WQF-/%6Nذi>h(ў 1^AP`[]d"gV\QAX'S`\g|1fLt55L.g@N6ڙwt55q%T9ztT.)%nuRY Bl9x B`yQ9tNd2Et0+A*%ؠaXX~;F B {t,o.hq|^K p =w~ V̧V|z\V5yoFO {(=CJگ8Qbn_ѹX]vm^5. :evyonY旞ybתf9-o|-7fYuovYfuOr:P7})tU19OL<b12bJR>I%6׸]ܩv)]Bާ6Uiyfe0_dp3B.@ 7aA8)W/deCq oȻ=Ro+~ӵ;خ;G}K.%b~yw}>Һ;p=(h*c<{W:NÄ>}.ލ?~CPZmܦd۲vȬ{&FklIiҲilJkzrt6Hk\o}u,(M s$U QT$'u2 &;J pg_w1'+A|*2f VAP! 
h ]vu$+]6?Xv004ߨ*Jras D0x%H4e\y%uukfî"v;9(tDEIׂ) ZLL0Ő["DZLxa¿h,2 K{O&p,9DsM$@ U ZEjeR:@x^{ĉڡ cN MU~hVHF\QA ۸R(JAYZk _!,60Kwiq+Q),6_u\9(; #&jIWM{O#0,{OD7/|R U;O$WU>b]K$QZ* ҂sDAQDaL8,|z'aWTR|ZT՗h~<|Pu󓧘djx޾[߬WɈnnp 6Gt'(2Ԋw mؾV^{.u09`d[,ӘE_:`,R)B& K8؂ypΆKP(dЖbZd ,, rψgy<ۏ3e+*2@OgƜfJsg!J2j-qMu0ppf|o~rwwW^z@|ƵVRj!KI9*uրP+X7{_ָ{p %g$`hI,#4brP(pBF I06\Sp=tf{60, L)Fë#\;<gis/&@7ZWrN~q:`Džf8jk< " g6!;ݺ>EN_x2I(hVn>!>ާ#ΚekX[ #㒨MVsxl BJY(+jÒ/-jjʾS[\/> 8oag.:O ]Q61R*㱹].y>_UސtyA5W_^ k0a6 swƣfwS6r!0H]K1 0b|Ov\ʝ-$J۽=s![ [G3E @Z=[7ߖ0o540XMd,I[wA#wJ*Y=+z\!+wk@ H  }.:c@Pg }{&0 $ @`%RKlbFKOKd& ?d( Bjٙc3@hL1184 e혣s GŸဵ%IYO6XWK$ScN6CK`Sp^iv10݂vd. ZP (/}CRwD%ϸpd=/ers0^ufI0%(|yh9|")hLj H$/E&I.Ff>גAN,TV*B#W!&DaXejs Y0y.=^ Nﵒ©OLM_* ͏G-L^?C™,?S`EhBRB tAC~_6DF!E2DZ-j с!2Ѯ:}BN̚⥇$ qNӆ=.PfaFL2t uE8u Y.Tu,/ %\woP/{)L0̻r>NJxi6X wRt113BdgG(T2o?{s 6LjDATlKՑZΘ5?84?|cENfaƸ)Cp|Pi_ R H4i<~a4woky}yeJ^CNhcƣ!kwb֜TkPyl02^7,V׌pŽQn -=av Yp0/z7Ml>gc@/z0^{yAVX^5LxwW5@ahcp %%=zxf׃F-ߍ Lb"36m>:{?I{x/uGZgv7aԨ h]++|0NZdz%QxomB /dhda2}7mb?T&F6LY^gkn#&?[NGxWt 7ko\jϰfwMQ6Ql`Z9V|kamp.kv7-/5/fh?j[/3ȺsV;7u̚BQ϶P[n?z]lZX5{t m@>|7G3;: |8NX\ y;="/SxmwCa6ϲCeֻڞhUPpP73h}YaJ˼QLve}׳pcxrćqo;Sq傗! 
- NF!PIhoI80CΎbPh E7hxeJPڏ/[kR}KxhQE7ƗͰxuv G0)%qXg.}J2b5j"YXM#jbC&hK==O|[ymrҺ*pU{)̡?gɎ-ϙ͖sdhZ:8y%{%/êfm66%+Gwr]k7gɈΓɕ+ɝ%AEN (͞I`q$\b`szHپ|LB}ƙ81IJ):$I9CNe Wy^hzF6N= /YINK`j^z^Dhmr$+_7{8giq99/9BlY^R^UJxL Q.x0ڵ+'g.m_o%g>K>xv)U٨#ڹZQ˪#d28ݕki@*¡DJ^&(=.͂j$-m\Ӎ&6)jC:@ώ%?|GZ~4T .AUp= pR$VEcPԤ z78.QUِM- r覴%^27E%8D6%].~R%Ʋ*)覝K@AxS=y!*Y͚TKfhӸѳKAGs+ *oYx.]Țax١_NMˉ1ASR30͚ ~9 h}1]/'%ǬrRv_/'Y(Q" r^N+@Q9?wE=p_NkĴ捛ˉ{U2dssL'6DnD XrB||tq29qΌ6ǧ$ q `2m\-11˷h/|?l4d?p_]݀r_@wW><ϟfû- K CZa̐;nAy+nQ-vCI(!2Mc2yO/^ 0?EW1]-}O9fs"M|So{DxK)03,PBƒ8d^y:hXBZa]vod}w[&#@yPѝoPx`toGVj*\HTQ>ȍYQx=*N ̌FHXn0`qBP`>d.ki (hE2I$CX#ISB)=sxg뫁{I(ˑ[N,Jç!&wa/˄׆(5U PHa]B>UŪ4 jamZ ܴ|y F+]u1T06سD6ӂ]s7WX7:]U]mU)/cI*s}CJ5䐔hq=_{͘/j!y`Yo+uyRVwI<4V=e?7xq-E ˠr^s9@b.r+ hy@ '-l<8rN.<`v<ɠ٨JHAwX4F.|] 7tzDI-Wrs?wU3!ɬJChXʔ k@9NQ'E0̉S*d烈V \0o"㛿gE؈kCn|SPւDFi wR!$4JL:D2M!FG>Fl=OuD2!EPƒ_,BH BhRQEd\i:P gu%Aĉ90BB"L-GD$B[L:PZj*"xMR2S4#еhS*n\Ԗxt&u#(G -{ a<*9suseDGARJnD eU"*v2i)) $C 䎧Ȍ%pYF,91$5DZ!cw6=c(^ w5>.^j6V6΁? =?[Քg^6HBm)ʺt̺e LfoM Tt5BPrk^DjeR "Q0' HLI0!ZQ( yRB Y)>g eɬp=Ĝe7a/`¢.iW7!BHB 1't(*g3 ds,9ĐgIUKUd_/LEC@Eu@eY^ j2AuRQxG-e"9TsbK_T郔WbR`t*gIQFyEqX`T*37 R2i 雒 "KǥD *S ̠+T"+6 iJs@j-!-!}%o/R6)/67)pId2ך1&]z\q>t@=p` )R`{K\cO:7gC7Id/nᬬix=݋7>u|}v]5&W"ʯ~PA;qקGAcDu1џ^`4̜ۛpp~˫oo3C7. o./B"j]BA1?ݻ8We* Gnh;ߩde/mă^~j~I ¹ȝ3}:39;fpW&(Yf&J,QܸIVY5fbϽW^1~}S+~w6^]]v6j1sGGᤊQ_/>d5_2_"LS޿}+8/C퐄aϩQECUU@*+ %cu؆ /IJ݉$.'8 V3ܝ`=Šaڰho_zCj΍qsITqqY|"lݭ5ەA[avoCaL D_94(HW{ȕdhz}Ms?QS tByo fWw^8ۈi/*Ί1c8+p4 hpU #s)"S }C鲶!`-}Ƣf&ͻZP'IhtUoeVի6=`%܌JC_G#ä79OA?&A<:_H5r] Ype#Y!_+( _⫧)M|@>LzDUp8Mqcgf`H?&ۭUDeOWtlr$L0tڧf&Ct\TRioY  nu=ehS:c fI?ׁ\HW4vha)G)I5MVF?EU(%H%πRJ$?GjHd Y"<Ovs)`Q6!lY&hsLr6>rk*$2+SN*h"a9cCf,(PZuYvW\d>Y)5e Ce.:1*|ÐĐطcυF+Њ;$Ђ%< eHR3T[Y;\ e7;cA(p5,`c(b`!2.,b0' yh)Fo c 9IR%TgK0&1M[^PZtu+rYuu[798 -'0[e.[L5Vdhzk|f%, r)3˵F敐|t֝=łmQ3˫t7, Ǐ!t sNs]3!mH;ekg#$MB)c@ e# "&="KrFY8| VPf͵AuIsD*q?KÚ%\>SV飖T,AxiHiSW˒ f)ͿNHTV:n5 ީgF=Ҿ{ẐvVOV0Ty&AdD*DeqAOE[<}? 
S/0kzn oFwu3Rz7)7N&'\>'7n؛'l /Hmyo7֦U]_O]@6n{x9+RokrtQ- ,z!d_ BdL@Y_PA͕0.|fHIhy$֤!FJ@*Ѥl ~30cKľ#V7p}sص ]AsX3mT~`KW\ ?V"Ф6)t]{x]@Hkxߎ_?/|V"Z2 .a lMFَ6cUH^6o7o1^He#^lg!N$P.҉Ezfr6jh^&+3z&B}z!qO 3FMk(M%w2/mh] Rr=\nSC^b8rqmΚAʻ$d),r5dRP"ic!5nM7 .(7fg t\NW?m(B<#J&'mƬKBf\œ\--˜!u,i9no}ta#\ XzY^9h~^C5֋n`kNVF`<}F?EO̸.9l%W;:!Q}+ 󟏞mMDQ¯$hx(Zd(cSQxz@GVzU%W-n[~EoڷIFeIZima1 n0m%U}z: (mZ#a$aEIsVj[Fwr hMg]NUE^"Mmeo痏ˮ(qGvJ58`6WGG˸9D?Ąꆸ6|S8%'BfF٭|Y7yN=H V=ϓ+@N;=cu[TȷwYɩΔ}/JMy\Ed h_gbۇҖ-(0-eW1OV ϞpcdH}'BpR]-H؆|;F"_%_tuJܯHVoJ]aP|WW֙7, VZۨ ΗnO(c0BcIK.zQu&prʪCrI[W| pӇBDv4Z΅8C,7m':+>E \E(ggE9gL^SzE;]ث8|Brr M'o+kGᬯOI4s QRnEi$ga28h)E >0}ּmbs>UཥVm%v<^]z>(A9&ٖ** R9啋5l?Wpʁ/kJK=8Qy T+$!|C pőDY1,cϨ 9h張mVJ{vF,OvͰ*mk(.yATc)?ɳIwy|{ }g(wn3 ś⟳ŷmbíoz[\Pf̗-G}JaWbVV㯖\ sg s,I"!76I`įeW&~LOW%r-~fp whʀydo&I>aAzH1B0ֆtS_ $ -xèpPXV: wy= Lfv۬qCɤz =n C%eRsί\Wt}4+K#H|n[2a|M5ῌ__o]z\8^~'],Gl.V" ǭf%[yG7+ne=|{y/ɹb2=*rb,+ z&'˧ע8Gi*ߡi >#L4 .x G5fYB UHBUs$=2;& 2PŪْ3t:[>,Pav]\OL.~k;s\ GHB*Goz}AGo4kfʣE$A#餲# *PF01 (\%R2I`AW0QlOW-%dU { l.dLRsƴ`ӣj,hkj&Gvme?>pþ/ zSyLk&T4heXgP.`K`S^:h8Ty-&('0Nj%RIM ߑO7-p?xi:t XJQ?fWܘ(8STIVU` )C҆@"|$M`ڼeN1qU5Q2¥BTAܷ%L%ZbD;2yB;!:ȏnU!=pmnSF |)|i?:} lN`wSu1\a bZiuMl2c uHy)e[B[7$X;j!}`*Y\9zQn-J%լ ^T?x'9KiY#0 BJ&bA5"V`1m,klkbEf)8y5ռ&wF,1 & o.'\=XW^ +ܘśf~z9OvmgBϻ9#"^d!("@_M:d&rW|@*K>^9(+<2?ZyN\0G^?U\&g}xrMYpZ+bv[9[X\,7g13;tYד31sY,MDY:?@瑩wBi5F83MI@ZAqXWܨQ$LRHBȫn.+Μ M^ bFpG}p/GBJ-8""*!(`ۦXq PB3Kz)ηIM*e!Ը.ecIFɴkd4d@}&WaǼK<+O@ ek0"1K`gIW\6L LiCD [*/4;`^U;f -285l\f^iNB K< K*'DӃK3+Ka*W(g5\y`JI1Q:^$[a@ J&Ռx V:M!]u}*+@lwrzC4,q bZ]iv`l*Ŕ~6b?4~I5K,ЛT_'师юHP4#u 4H*;sSR"g5}b*%ñ>,,N/|ecr9c-ؠ1k׃2w#۵X2L[t.j'n$~ޜ|B=[DgB̲>U={zvHJZ=5膃gL1F SJx8Hʧ*uW=5&hKf2R>IF XD鞵E/ rŘHÝNS:M2DpA HeR#:;|fMc iΣq¬0SWa p39!6 wu>k B/X6|[29ᖚJظNu T(58*d0+aH9HYf91O[#PakL՘`7LX y3Sh9{Ml\lÇ$2hWR'i5¦3cU Jc3^[Կӌy@P7@9`8FPSP!: ugǫy~U 0]WvR lbKzvYӃ~eoQ?mL=.W9.%D?{A&RuOnQItޗ^W ԃyYqOv}if㊉O-DJ[? 
H]Lӗ}8:hM]8_yM3)TNty}U^^?]])Ĥ\ +G>~ԂϷ̥V]&p4<[ް/ Թ\/kcu?_Q~po.[_ +kR/!`{FK JUz5Hy[`YKș5ygFk~CVk' y XSEP͢X\ʤOԷ|oU]֧&C{ܖm cT %s(B/I&A(e`mAQI5*Ü OUyc 0*,lܺ`8K2+i@ZNEmXvᵖ YUQ4Fbyka86|^&8CYnڒޕ6r$Be׋)*k,vn3/YyHڦT"):Di m_d\:0zQvߘ QM,bJńc(~ȸ[!,[WR'~8} \x[`xk0u 0,p7͙p6 7|A螙:,CRip*LZjE5 nǐYb(GHfr8QqhR [K0ۿ#_o}:(xw ԣEeak7,Ė1_:B<[Cb"yNE UGN0l}<,. eJ,www\~$LQճ[|l9 V#]ƽS.2wLW!i,zp8L%yw9JewxV4j@bE=yꃥz2J\n]*5ؠ/Sh9)?pSrza 5Nt0 Tf&cʇ,7^dV$فwT!:bAKxkhYn ^8=6y6zkvKywEsl#YҎ}u~Op#)ExK1MYGRl8< jY@9+wf[bPbqⵕ+\7ڝl߼\BH_v=OL("/ Fc*K/#tp),*'Os< =ɨ(JĀ,|vS(%2UMg-%xѿn..u_a#BԇZh[yՒ`۔};*j6%Rvŏ9e%~(p0pweџ(I4} !V, CttDWG2X1ڶ3lƨHA۶|.HS{rWK5Hh&XǦ`gfY^lf+Кqxv;kyRkxN7;͖AXLYØə-zV9ixkBiFEjkyΣ}BŪQab:*'H0y'cInen\<6 Ի 2-47dlV.(驔ꩺ{4xr9'5YZ$X򺖃Q}/x)տx; %Fj =Mo#^/yJxӁwy1@|˺<D T>9!St>i9;7g|skVz qʹ@uX@4v8_ƒʒ8kIrd&XKA_]IAhRjdǀ Nu=@t_s:h~Пs"жD v 80T8 D}iju`jU9E)q>( Ә: Ә:-'n-hj79j~-)RkLu^פvZ\Z&Ԥ-4|Q/? T'‰j3M>`ɱ3mN (+ ,jI4lz%9%ݾʳMI?7"XwGc` |р `gЧ*}ʒ yޚXR.ΧƣFn=iel[0$V V`7n`pa.ap<01V5dFR͇H)0Bb7wK?e$xqk5L~\@w}]̟Wp4]6SM?E,M e,8r_x\>|U0AV̖3-hqw./}x\p\Y=={ Y3y1byFշq]7QH4`k_ Xzg&83Z)dg$㦿y=_qHM,opFw֋ϞXsn`I`oyǚ\~Z">*'k`tVIdE!10@fg`U rH^Y-P/YF΅ja.`1̃& 6n+4/͠~1..6@i=E(JÕ3M^̞QS(ZuSD*3,ng#%1sX@JB>TץG 's)\/vOYxe#0iNb-&~ǛS\5Pt 6 `rC$g,pWNoTP@*R Z*@'Uۯ/i}F<Lfc qF"Dy>Lc;}/yL 2O6eoONAAfKŘQ?>nXy?}_I>3;E|s1;r Ƨ&XwЇoff5~} `8rZ-0,&j)x%y'2ɀ4ּ Ncaƈn6 s>nc!c aAtXh{7B#W7b-Jb~ ēw7ۮ|[ZO@¿-j{`IsU N"ͅdp9X48Qۉ (41pS(FA) `$8JQ7(%.0BҶ AA w~]N]b9[uKDWE/y@ KMwԏNǿj%0(&\pzB9mY!W챢1)CE|$N?Vi Kp$FR"`FLn=C[Ivbo°@-Ke;LTSRaаJ X2Js$JCj:RH;'{ ,Ih fW%C00\ד2䙶Őet"BbX.;̃@}t<_"!{ڹh .6Ѥ"n ";< dJt`y j% EZσkAxGD蘿p\< k78ee\2VkO0sv^:%%>wa,V`z`b |$%D"-b:lњ3T6P2 Ga8e6ϴ3&FJYu'eLdqŏ73OYl: Li7o˧{Sr8:>GpN%wNrU-[\xǟS͠X-BpPDcٲm2ظ#WmUilZۇ/bX@=1Ma(2<kHR*{O[b,~]$c~zHG}^ X,xA:s֒lQ8sU/A"m'_9o B :z<|P ǿWJM@RVn}+F9=02c C36(ɴ-F=#Z `$H "96 /r.P'5͑YgdN Yj%8EbT3dA tp0KoL-bsg$mhoU*PhqS Q;wXΔ&1[$݁ƌg#RxNb*ar9y[1`xjU ȼS6z0E(8cG3/z"s")7M ӨT˃c5P :Lܔ̚3%2zU;ޛwWeD&v)abH232%-yxi^']֢gLm]t=urÇ̡+5,=; K9ʏ`F3?.CdyMd뫒$_?ãI>%9G{fAdjw +m$GE` 5 V,^t՞5m`J3%b:Y.(["'LY;]x]v,~]oݮtE!Li p>KIO\ Ch'l'~|,ۅZx,%YO 
ݞOON@*`ьp=IQ1Ԯݨ($`@aE|KdwcCOW?=ulH@YyW{{uȦ3:j2WH-vYE"H:ge21!$c=a@@ԁRn/ڿ{u0x FJIѬwCly~uvҖʑ~"M77on>M NEKGiF:֯h*0 NdE`[+T';AЬFO|ӨBaeП/Pwe?Y}p/:i:գNlkzٳv *y. ,d:KJ IsdAYsIڱNi1i)5y#3,_/\weewZ_05'8ޮv=  Ԡ=h}XLٯRE~,YpB~9D09]8k$=](JNw;&:-_H;xrzR 98R/L$g5ߜ|/O^k%yDSc4DM~|5;!(z{u=G;BW^UH\^w5}uQ5B5j7j2@eMݍ w}@{Vh;H`IP^7NyoL ~T}.{?wu\x=omp5&u_Ujh\46%J)^7gF#ᦏi'9o& ~NAhZO}䥘p;@wυ(ݪSw{Q<A_^sV`su\5>.R:c,z`7j+!TQjȹ׼"Ū |Hy{uZX6L%n >0<\hSIyWH[˸yS K~%, `v娄JDrGɾf je]6&Ÿ LVBi#9#PVȎ$Y,e1&74`beۛϾڊt<!B}z,hF6x! ?^Sn' &fӆ͟:/)e?όӎ{TH=>8Z4Xu0lVFl&7}H1~]g|eTEE,_!Dc3!мI짙vICB=OB)j9NTzJI=v$G!ߩd}C+ּ@sO=m,!{~ÜkٟKtbf ?>azԵuL1???oFW5\oD|I8KZSeTLrV1ӌ'%}%-懚*oofseZiwj*6*e^Uf}a .ₛW"`ǒNI侸n^,W\ BAf9FUPw%Zg=xyփM=ؔQkBDa&KQTJF^0f*cRp7xi%G}X߸`cI$>޼6T*}AyuJ ?NNX%2 2j0U ţq>U VD Vg>lV,_ޞľ=*%xJ;t{n!vhmlc`.8&H>-t>e=6Ca}|ҟ¯ZǕbgwfeIէgJ(5C 8x3[GH- 331oJd3￾;#MJȊ%i]-h=zGkaaVSu6tw'Ղ#JTp=+q6^AOh]erR>p\V :rϖwc1&'BOZʀ ;ΪKl"Hr+{D.^ȴݽŘ`>$?no/ q. 벽4_[A},4Q ; ZKl{Ldbr37B3 ^WEAas%ϒ,̏m̬h1ٞq #bkY|d-nqJ&*F?5NZ^~46m mUX>l.sVB)۾X̌5*-tBcpF!UyN<ɺyPus4ϵDavNxu.IĒO">:?= wNwϙ'Ta2yU#tG)E}5%;X&ݏz$[B3!`9챠لVMv1 ðGD#BQބY|s.4x9>09 q{LXF>gTd-_JW.7{v*q}9! p%KcUbZU+_Y[O>@B FOWLny @Qɑ8C[%ƺw=\G!c>!̃1Ĉ9>peS ?kJ~RC$DMrZ'| O F)#kЈ)҈QE 9;Lx s\.uJi9`r:>y:1'k͘=vtQS7F-[ߟ9y2 #h߬?C^ӶOr,q[Rs/՜pےjDLzO~6{݋YɖFtsuqXrqF@Y% B/B9eK= bylrF^୵nlv̺#x׭KƣF`Md4l9Y l)\MNa%7d ǵ`)ddl -N9n4@䊍\2i.&Gw;N[PkT&. `󘬟pN覹 %gyƚOiҭImCMD+1i..mEsd(|:aDSEᑈ1 j)؊ItBH2;2F#a} !fA9Zϔ|oL&)%s-(2S@4IG963s$%r$Evjz;OLz)NS,2vN$8Se0 S3g {8{r+ Xn!8WeAIF 3&''l *nC dU['̌1e[BB1#:B̌sLJX1.:b&1sI_$ S`̓b0k#CxJtҲQ`,`޻"Rx^@ LVC̝@U$ cFJ0_UU+YV(/u6j#y2,7*D_%ÕeD0$YД`ĨyG:~8XV{*6e|lj۷e!އWXЖxyaf c9A h|B!=b*FN<ݻ޲>^~,ۋ&VN\٥- K-a#zDn^vPLfbu91 Gtֶ0 7d0k%3"P MZqc8 x7uf1`aK8.Ht'JxY0U`+ȵLJ1QІRLL`'5*ozJxu@86 % H5g`$pA?=cH `E yR <3 0Fng ,A }0[^7? 
T6b0@"\lsyU S0ȎY ABLc!'r!xef83’Ble}TN q0(f!dH2U@=,j.lS/ &5enIcLVZ3SrÖɸ`\% q49p U*4U&kgTD2nIo8 jyH+`ǟXEшrf,fl*(%Ux^ ~(y0NR9W`<:h&>}^lKnQ޷K6UXL'btg>@h/iL?nffc:1cxRnX%A^IXkq*4 #$g=Vk%mSGKIu9rGؑźgG,l&̯1 `5L9ī)БƉ}^" b'IUč`Ka49),3??|^ qw(Y?wgxRo_BVRhGLWMb@~~q*DpUٗϓ,<u{Dӻymi0;0u|I/etj)qPLYz/.TL(taЅ.KINŎ5_e#0eFZᐚ ]qЃ +xAlНf/v'=?M&pctjy8Gf߸M^6ݤn6L.ܚOݶ_ \NZ{Oݸ]uqHĈpZTu~Ҋ͙]Kۑq7s"+!to' C-o4;U?CY*1yM<2Os92ğW>BuEnBzSQ4b"# 2ue=hK kYO/1emT 0*Sr:[;7uEaUh.@ɓɎM2hvQHY`=mnyRZ€ {n<YǬiUͯYz<$A;xe4Y{nBzƛJ@#A"7MՆ3(!;.]xP` t̅7͔@yA5iA׃7/C^MvW!mL{Gry2eS.tyn Ό)KEfR@M/C# ). yӸ20ěwyaij@ԛf=wVZ䱣x ׷8i;eut}_׮o WP*ҒIZZ "p%5xN. (Ӥz\_P&wo6CgNΛ=^*ne^*]iʃњ)o4J:"})Ha8_eŁZ@ ZƭB )pVόFSg );ߊVFlwV$0LVRXU' X',|."cH{!h8#ui $ЪzAZ5w6(Sr$NZ "Z-3i;ohxBp*PQCsd [uGQ5 Pӻ?"?A{U Q! 'Hwxz:0`\M1QTIŴVQv;05K Iɡ$u4*\:HU"0eQRP.~SQT9f_8)Zj9))LDZw( tύTuyq [@/"iAw9dޮG=IH>H0&3aAT!weKܴ~7]-q }=p1 Ś>/褫q;؉mxd`U:hjtGMK鶨+!:=+q&NϦFo%%ߴU[:Ʊ4:'mߘub%DBWxDlcJ{hxXs[؛ڮΚC`mҕ` #C vOex2R;E)ԟoؼ $d#.-owMkc2R֎CI|EщŐ<{зQt,?ɿjV&I9P9x"s'VMo]e#(5C sH }uxx`zĦIPefVXh;*d/t'!xC 9жo&x=sU}7k{b\"}Z* BTOvG&S 5}//oS=)m^}n$(aXoKHe}% X)@NBq1ղ*.MlD1:t`26i9^nun"qΞ]p+U/Qi..jx~|ֈиFhT؞ؖ /6S D6"E1vϤd~VOC]ܢ8DW*C4spl%#۟:FG6G/ctl'ZSv0G9s= wZ}i}ޜ'm1AK4,ڊ(BQUYȕE"SQ^ޟ?`n%Ki TyR6TaB󑁲)ilƭo^=s3Cf&fg[ w້tDD AK =BU\ k@ O{'@dHNRk2m|;x)0b}J Lx]eR mPxzp.n5ߚ:H.dɭG-3:Dγ}6{%fiYۅ@5)s:Kn%hRc1~E=%.j`T=PQǽo.Z1g?}돿\{QJ{50%0DAM#TOTl.'ͬ_FÍq>gѬjkǓ;˥/%V˵37S|r:1K,ē1jmKnibWH]ITTCy9=*toTlkب붑MR 4)w5^ KpSMpxD4Ÿ7KZiDrNinL_hp$35mQ/bؚD cyWNe{'Fx%[(29يT4PB]"!q+FW;g^Ui9F?j]-ZB&D[7!8FuxYd,+$ф ƀb*#-ce=*) v@hz2;=% L!:}IJ[敄Ҫ림~qJHxR!/ *2T 62f r n:JwsuTuunP(|;첐`z@}}rif߭j|-U*Д a}4Uv`Danii&ɐ/=I{} N+c~:ra f}1VhP-])ko/NqhWNBY{,v'4Z_9H*^ܽlvҔZQ1UINK!D.&u6D>4.(񺐨dLQ9Oh 'ntK "Ŏ5 '8u)x0 /דo f!V!y,+V0 1/ XSXc5K N{5l6Gd׹2-t7H\BU̙B,/;xdF 񠷅f| 6Ę;GU-V΁V*[X]5XPFގOԾ|ޘ*&ˆ|eQH`0+FMvan2v/BBZ|3W h4={e&e+h_rwWnHDOam/p8:{j aaWT88OP.׃_stnϮrY:R*ш gxOo(mEudl=ܓM)zoS]$$dB _6_O!˕M?qV૪댍pZ=W۳Рt*llhEgUKvUW`%r-Pd[ܢQntqվNHB{%Hh@c@céY{U4fr˼מ(( ƃ:nMt H:GQ?\0]mv>q \ _AY g%* kѰW۹nƪCoQ TEdi<‘2khmЄUMȕx3sV3̯ĄjvF?qJ6=ƻ8q9 cpz$BN͵Dz# 
R>\::A\=:p&eeDE=,D\rkl!?`7BuXSW )-GՉz솤ջM ^ ?Ӳ& ZGNmzRB9XZȒ}zFG7.iZ+:!q8y8)_ۏ6wGrs cf徱4yrV%l]M51|~Q'UJvT\9Z<˶Dp?oG_}_.>޻qWz2L' :PCnGOim֢euEvmAV,Z;.Dh݊C\oڤ\e!1Ax H&NYmg)Q=r'Kkߖs!MwHsqqՐfjB2!M٤!h<|wS$,ǵ=S:"7y@)sy5 vzQ7886g0jD)(2} d)%)__1޹ڼAc}p1`'ќ!!'66ZS 3{ S2RmkekIR.sXjPWicz^_IlL62汔'ef'u])6yj,8cPFbd{A 6W#Dbq%sfIH2La5n #<`c- gdLB&cY^"}]%|=b&xNjoW:5Vyaa$4-?` - 2|Luݮ_c p}VWh@x,B=>nW I eBZra9ݨ}hs~ia6}>դ |nhD`KSh oh誡WlA'~57׋pqĂKBH=:&Y"xNq>+FFFGY?˳tymbJfLZT6{˜s\ H8мP21&ĒaBmJ2YǁQ:-oR;}xܟ_/ek>K^!\j~BZ/S|z6aj)cNIS1f)GL#r 4a=CњrO3bjCQyѥ<s34p1)B&BDƹEdHP=r[ø$mo0GLWa{7';'K ;MnaB~svb;tbS^FiTQDVzhY̯AfF>P>?%(Ǻi>edݾ?q3xWR(M"=f! j-_m~cADE`RB~F"bWmPObWH%r22IظMN8N ʘfrc#F@>J'l(cuhbc})CˠX L@!|wڨj+؂'Xm D+-H:Ef3FfZth?iے6 ҳuL@j,i2iڡhsߧBgA~#r2;$m=IfJz#w|5X6` LWY:8=UPm9%AZ-De݆p ʝl!k]:RN}3;XbD@YX>N?B)jy* ŭꭆ^XJ2Js/%t@z2#\z@uܗcE1[`a5XѠfv~$v >@=h@&syS"/VneqNV:WRnն,.0A0ͫiEN!{]w=RqX9uG=bU`{ك?9ÕVCp aF0# 9$Lկl2  lQ~pۡ4YT3  O.Ⅶwkla9/qV' z Cfj]L*W$w/:g.>!]{$ڍXI'E-rכILyShp&wBx@nw=!vLR+An))QL FSL6eą L wx-bλ wK|.ð!\HԎRxo^lȩp.0]=鴜: Q:_gZ SΝc0~90Q7O&_wBp=O1^|?No57` /N*ZSrA.Uf0 ;,IpOw7`SR5B +'pg-,?2wu=9;:|G sʐ6tf_ِ1u@ky7Gy;Be/( kMygCZ.kx'ᒨmDVWzwi:Zm8E(YV'Wga Y̞Ǣa"dm`~Յ?'bm. 
-,ѿݱryQe6ڏʫr>{vY~Gjъ,?H-_0tqȏ ]a9 ,I(&)DX$cA>I8K?W۝;J?hBߎ~Y=?IţrZZ D]mQc1pnUe,mc$J%T`ʠ VkW> |Vd@oӧQJarF$ϱt xABXmV>"rP*P % ktNu6·^PJ^B2+Y} tЪ@EkJ%'2m)"y{O7m`t A;Oj=]IZ(Ao5H@2Dr8DLE8^$r?o|/V9_rX|f+] $jCђv\DW}R'>ipTf'&"Wz> r%yj曂|@mF¹";3p6((C s<A΅9jvCWU30W&%rAn1)BkǏIG LnR ˤu8 KH>I& 7UY퉶&ЅCЅ\tRCOmXͭGH_OzPZn'v D/.Y $=Qܤ!H]k{hٹx\g<3WSBmcpM kb[ZIIUbi1H|D=!BT`LO7;3 WN2Bh>`L`2dҺnCtgWitŵarOC8Rd0띘O%'fݏy6@>χ [/O_| F N s58Ҥ+܂T $OƗE @ҭ?OguI!lȲ` 2ǖB H/6~p ?#k3lmchݜ.'8 7鿜bvڻ鞅EO[Nw+ý7{wvv_|7v{/wwo{ӫ{?hp}//oaQk0|<뤍m[[G#e}S~_[/> 0 I v 30ӅzV<:`{m|w'p?WٗA>k:<~8z,7M|3ZWkŒV IC]yj gL(4={BϟF'9.4tB˧*+)0dҐ1\GYX++Q1wpLrmn6&Y 7Enp31 GQ*46Gz lm>t8q%n Sf>vsJ.8}Şݼ׷7$TQYq '6k.]jTqS5vB AcOR3$Ly$VqN]P;p9 0VP>ЬeLj !& :.͡#(58Y~oe4skg>O-BR@:~OБx/ tlc@ǻW,-di$K 'Yq^%#|'hpA Ye̤b h0% 0 PŠhjʰX󵀅h)}L_6,5"%38' cx/Kec.gMr$Ǧ{'$@D߅2eL͗,e*DQ݂27ey}mJ}][cq^lq\loch͔&׬L]YU)B{K V N^G9M.B.qD-` #h82g{(Y JKOr-*U+/,T C MRKZ^S>a xc2?1wSSt۷o7$j1V|l]V[zqMt!f,|嵚MXjLI!u " J%̸ؖ3k2j%`\u b0Sv* zQptp:2d*$Y7TK :!H=Yx[ҽV`\fFN=50l4/q/` U-!+&\g)xjgzeD܂?@a)ô=CAG8("14.4]KjR]v7 xdz7l@/@a"֓[Fd0LVELuU9FI2V9i! $S.)bb넢Tr@* z<Ĕyr=Njgu͎T^ӑzI~+ π `1dDkWj>.dR"GkB\"@`1 KDU!Hi)]G2w"q?vW0R`i*$`0ơ ;,"TӌXeD{F̽Ƙ0ؤ*A 8~FaP"z-]-}K%*CߚeS+p.:Vt*nYeeðxIgvoZT&Qmt0tb? 
1gQfK0;ˮic|z>/,igya^<;ˁ#4m  1!,"h-gዦG O3@/ !oƫrGX}TZJR_֞VgIT!JTO𠴏.~b*|Kbf8i&M$*#íGi[\61{QJ`EZ;Q'ZRM#\8&H#.?d#ܜeK pP7€WΖ|yڍ Np"-]FƣSRie/9;b VlM es.=b%+>pF+ ^ۭ9Ctm[2#r)Jb UN5.fk 6 ixa欮hݸ%N*oaEk޿ha\)Z4EGwvw'Z} Ѻv-ea[D?,dE3 M;Av4iEHS̤XiJ *M1G(Q X,`h4ThKgүPLu_YJA(ͦG)O)7TdV͞g[MHNHI$?WAD"*2Y&)p\m/8aH4'yrb,W^gt&sEˬm^< yq{>I )GZ+Np5*sÜ8TJMm_:0|!f.DN1065ՠHq>z7FyH$֧)Yɧ2z5{?Nۛ]_o\y|-}PI6+ՠ>ٺ-^KY룦,GBo/Rʁu#F׏5aot7]o >iߘ,byulr҆7lnlac1Ge&0jkbG4wr\c- K7vh ,;΅ִQ~º6#ǶsAp^).8+).!MuNi+_<8q't - )+WtD |`9eCHzڳ%Im%ҽw^<$^<^<Û7h*viz:E|FuOCl9J(3{j@)ȻمC 7~BT;7oU9 ];V:ZSCų^>jl!KqeREAh5*dZR"G}K`+9}?63R+ՏIuzW{W{%tZ":INɨ() ES"7EJ!&Cz8=k}(|XoB;A_$V1fT|+n2+UL&$\ c e |T@hrN!E'|഻sOOl{bã'=mOl{bۛ#W!J+N6aƓVv 0b pFkɁ!MZ `:nYy#Sݑ/ܻ0u 0 -P?%TɩX\r`Zf'>q+:7=iz~{r=inOs{{s4wnaŐ`CC+e`qPb}э(R!pr5 t!xgj/u3lMLt18;.}q|Lɻ2 B ER%Z#9Ԯ̯AW'Lbă+owlrdn#4Bc{KRK4yn"L{w4Ӎh z=2[<98.^ݢ|#8~?OG7@$A<ڠ" e !{[&DFFv\v(OSe4}kx\b7\nOfߥR2]Uը=%O3热cf !Vba;;i_ݑEMв1b콗wĿ]!+FNIT@H:G%(@r& P;,^<ߥ+o/>&$m{x{x{Urͮ~QS޷'0Bh0C*B2CwhH˯_$ t;{5BZ;4uPFqʃT0?ǟD2:晲(0mk?8)m\~Ւ7wBfD29/6r3ϤN\UHFo#6EqI x鐖Ud~/z%%mt2hl<'oJcW;t )ES(4 OW_A]1/J=j[ . 
jיQ˖LSWX$sv<7(G}8lYgc[Аbμӫ=Ԟ>՞^՞^՞S{ 60diLͯXL|RBj!0+)ddc]VdY>#h;I/YPpĀAFH@ç;_丷ݎ:m~@Y_U)J:)K;bxZQ&'XeqYs?3=cⶇ4ܶ=q%tҺא1`,ci/BFhLPb8";~+ٻ#kF'#(Ʋv tj:4ю)+te M]|y.5B]2'xPhg/`Z^֓_O~xӋ'ߞ/̗ÿ8_NiUOD[W{SlS'?Ëj uv:~'̟_i ^ct#z +uڶ?/5dH kuo9-t߉œ_VEreziLzq:ѹN7f|ǭBtĘFSF66B ;LdHít8w.D{g%B4++pS`t-s x0sćqȬ/5:z銎/Z"ttEcХ|8t7+YHV4h+l\TMfwæJ}>dJm٤&I/ NxGOIV&j94k{c(v.2ȧBNãxoW& ^8o|Dp$OKs OT\i5<0pnPqu߁K&δqi]kT7.ZT;Ie&Ⓓ#d$ MgedW:EClcB0aY"*ț+:O:rYmD!D4:sAmQ>W GFWz=yQ|< /#Ԑ[w98i@-PߑQ<)MAԐu] GP"ύ}%"WjJ9x.A8raݪ/9 ^͌4w)F〺-6EO.d\jY]!3K۳B[qOf7 G 1 ؘ\@LC,%N*eN܊!"PѱdЄbmfTEj*}N4ĹX!o %ku'CXEDhSXp!5qJ!f> @gKlRg{\%z}΅Ot AUT3\ ˠcQ xƎJ>)NـE% EP[m]#[.A%=l({Pc@y~7FZΥӵ1 P.WdҶ(99Ԥ6Z:U^/Rҏ)q}+GPs ˛6,%IZj)QY鿥QƻEE jʾ^U$O: :Q?>VHa}LDbjPļsPM|$YA@ c荒.s`Cޗj-'P7|B`2dЁ$rm$-#*rdREeT^ AݥT M[9 4\NGРX {E FZ0!&͊GsU, I3xآ5\Q:]@YL@OR@z2-؎6|Zb[p唏.O\R68ZL= J\9qzTΩ':m9v&;il8Y:;c"EDu+{zQyރEFa$Zqn<di˺$eTՎ?ZQ6T6O=if(a{W[d/0?jZX`.-Or/nmxYC7TDOCj)>$$*:ybLݗm+:bK'9sؕft۱|iIGC-n#pTǹ`,N =cq݄^Ey^# X)fQт'! ?K@ jfiC'N߁K!>)8vh<ׁKrXՀw j?˝z۸(5@J8.{̆+:9Tx1i~Ab0n)p zrnV7{{/4YLu7\;>ĹKgcgn]9}88VJ4Y\0Q D 9RVf"rl<>fr rJK>P)`.\E$4JT U>hm n譒vaO<!* S҇dUL.4)ͱ(RXܘp+)Vl=߰gIQǮK*w5 ]і7R[p9z*WyYi#-MR&pFysw٩(:p;`I< s>P*WO2iՎ2l;vO0Q0; 5.$9,M/]9 e_v2d Xr{wٽzH/SXhﲗg'lulYzrr=5+@='tPKoU\ JT?CIs}N>!{/{Dk_U;Y\._/S-\d"au09AdF~X2)ixa#ͬ$(d1C6ilRi#/f{Β:<:\xSbț܍g׳ˋCVh(@' D69IlLU3@em1ՙ"5hthC,:)9-r}QdwcEi1Swj#Wu>I:u~^y꿿Dk'>J9dlpVZtw\7")nhS>{Y DS% ]"rF&ܲ}r^y\qw8iNx]:c_~)skf2"1NnTkVYTyJ>=&+35rN6`!"9 P9_$ѫȏҟ(*ĸyːooicVGMJ d1,DPv" tjN>d !jB*'_'x0' h{&?dkђ|-ƅ8r] /M1nC OiF^eG%\k}wvygw<(@g;rp|_} jw=_x'[sx#SLL:Y\{f&&RK[X{66S }'TY 3.$ Ӧ5 $ Zm*W.Xa``r'zH(RyUN #[1;A@^*agƱ*VEQy/7 ؎ar3~ %O>XNqWDWajF6afb&N376)y ScWr bt1Hc1aJpG6= ?kߍ`ɛtrU@-7+'Q{:]'nN;،m.j D=?<Ԇ'Eȓ4Q̔ؔQV3-]rE49dG NGe,Q2D/.l Wj0;ڲfVrvo_մ4~\Qr[s^^_r2!f_pS,^v(n(Y?Ï?}*!Ԣ,s8O 2vjHOK'嘗@'eJ-PINJ K7'n{T@YtwVdPRAY;w5RtoR91 eCH~ Sj< o?x4ddo{ΓZ2i*޲SSd&0Zחd%"wY2[4A,m d~*˒}`O{lG㮳*bPV~_6F+zSK_ y'K&$10ëSGtge]^ &,LeT.D&-rX㥂zإ阥Hdi{@9ӧn8d^{(@~e%h2)H$ŐZ<~E(|Ko~<ܣ r0&Nշ EYQi #h6[?Ma_dIcY~@q"Bf+(Faׅ0WH[i GE DS#Օɠ¢Ix"8J}? 
)"%v7l,MLcA8#`m5 ͥ{-,̓j!N( bm((v8˝ 'rIr?]nWe Z8t<|gdv{p>Zr 4{DH<*@ega%+*' j No a?2VE.@&4ɤX;4 pu2"o_j(%oKA "w֫ isv9p,/1! k|έЋyAܞcs<!vfJq[1 }j[X.o tsHD UK.JE`^OcI'I M$" |4r|=bf׉BO3T44>D~ ^ro5bfC W©T@-9=wP]g봨EdE=Y~ --\mȾ=lSB\MnlS.ov4'=oSgŚ:WMv6Kp>lTq$Q^rLA$;͜vO@8io dJCâHqN6)Nl7+C(rv:#92]`0clG~xYdLtpֲy~y_f7עavq&p ҢMo!ݯsۓ6xGp.+O>h$j;TjfgD-j;ϲup<ӯ<=? 9lB_鼢s @Zᦂjm3 pyeӿgh4oOSqmy ŨJ罹WHĒL|GqpZBE) n> U2Θ4(8!.@3.w(cP:fǸzEtQϫ@',x06,vѣ#XQϮ#փ T;qֳ>֪>!R:TُT"lLE$HĠd*~|XQR9阒TWM]]}8 tX50N%UʓnzWE8OJbG 7V,V]5R[sϦH?+[zG#tGMpd1s6*d|GKyyi F !x?Q: 8ɝZCbqj]x(O/s^lr/>}sx#Mp"M1d}4@(TRk Ţ@oܦ]Q1\4.Ph5Ltѧے2d]9M>Ӻh:nwlVS(g]˟oGVyxnKD:=dy>c]Ju;k:V pٴv~YjX2V)_Iޱ䐡spk=XQVLZ.d҅,3rL:;g"0,?o҉s[Wu'hצ]H[9Ә\)81bl),d䇡$@{V" O< VK+h sse}*[29D4dnϭA&Bd.E+>c525 ͥ{2^%S`MͼoX *e|:{ډ^<5_ϓⱸ]LK4YJa@ljo3]<, 7bOekpUH_1UWie Ar3e! 0Ɉp@UPD)H0 %m9XsüӒܟH7WgI݅.'ÄQ_|Nje\s\u<ڋ%P\khХ)f3sDc%Ж]#xPnFm.XPHyH "yA;W%.u.]c$4?gve uAŠDu|(vJ0$n"רA0NEƚ .4@*E4<%w;qN+)b]h0󴐎NPQ LǂߒL륷Gg2-+ /`NU[Z|vsZ)^>4yl>W8H"OO  q[CTU{F@iny} .ϸ;cSD)$: ݌!}h%>~ *xTw3R Qwڸ6G5~TT3>[I3Gqkad'Ƹm3 Є89?9{wpX 7>l l9$ęģ_QP$g 7I0(ec<% NS9L;o,fY Ia,8j%θiㄤB քh˚cm+NL%rn5>9 ٕ| ,fd <go^Vs\ZR0\r4wmCTk5X,~nC")$zHZM$sqr֡9/*Hޫ y*KAQ+j%%8F/b <@NBA"HqRXEP9u?zrvjp"d#A6#im+<{s圬̓I\ksVb%9nЈJ+;*E: kLLBTp^E^k5/V )eYTBFzVW2nb8`4%=.BΧF &gGArȏ8S3<)"C}ӥ~c?2l_)| )ZmIw-{zh.QYtI 8~' w/`ʦ\.7ōYS| cHe)z<Yo6b{Z f])oL%iFWo~uȆF$E98䴙2AςthhJF@BYG0#g5ۑsI=>g2l4C_E4#"^Q=ʈ^CJWF }f%8q2{w栧3`/šI|Bwoo~9_9}ǍQP]XܯV!I%ټ_qiJ{ghGncdoۨ76BnȹbHC2au=r*$x/0Ha@q!m:픾%yɫ 2Qv>#AbJ0&ώڟ@]jb͌I /$v%#XN&BKXebvI;s[G~||bqE'V\1H9_"{gMRӪu|rgV`Y{\=>ފ\iW 'z/!^(+zy&J"D=c|0AQ[PL%&+oֲSҭLzѠ>ͥpP-e`QR ]X0[D9r+rZDP, 3Ø܍T+Ij{l~)n(Ăm C$ wxąሲŎ)啰duSö<(tPc$-UƆmg/g4 C;pr|5*"~5Bxmnb2>ƨߌ~[U95i-/X'\mkp7&=|۸rdL=1l,!T5Nꇾ-+tF%.eϰrn4o4+%>+Ĩ+DB,1S/) |~UArU#7l>h={m[ %4oAW,uFĪ9̑gpZmm\`]Be5uϿ|} ۡ4{ 9*7癑 ? 
k RY-cgx*XZE`0* K(Xih K",wB }Pߚ:#?@+d90S?;\X%+8^ K/Tq~1C #³Ģ6c (VGOVgK x#lv>ㅸ{,sCwKH!||LP; Z OO%Zvv$ U֣G0؎M۟KÊ6nt]m,!$#p/4.U=AΗ$IiX}*O㜙$ y2m9WnWvQfEk|WEFz JJKШqaC }!U<]iqӍbD+Fbq[H1^heaF(u( mT~Axal k#RvF~c]4&Uq0aME%Q/ԮO Ҽ:9sOB_0q5Ҡ1b"-YܘK3@ \B-*0 Ex@]SZƲ4@ݹ4@_h9BSu;Hl5,Œi J{E>7A¿\#Tp .׀[hVg>Hȅ-sP[׭f]rq!.<(@J vIiE%Wy ~e;ۖBBڇuh۷*R$vUQv(|{9C{Y|5<A!~nA6E`%}"79)ϙ1ƔrsO\+vn_snՖc%}XG)=\qo[b*v6yԉP\UxndV3+=E>M{)dO[\& ۞ T ,ǘ?k?8զ4lg'Z>1wZ r8 Bd 69)7 Pz sD+y O@aٌ`ߵB HϩE aߚ?M_?}~Uׯ5N NE0'e2_={ZϞΠюƸ^ռVr4>.aۨ7v7Z#z7^W?a;ڿ5=rNԭ.?VP.n~~3!wsoN7ϽoS?j$< ؒ_lp7Fқh<A?t-=<:dl_}> AЁ?:Q2}>G?lmTulԩ <󯧝hQ;X,>MNwwv۹IWMٻ'yΔkbTK+I뵛ׯ/ӲM>Q2{Z{Us*_ZOߟݼw7Wή߾W= /.9{WٓTɄ|on/GeR#(h" O﯂,WwOwu :1M3 )x? |#{?]8"+amy~ ba3G[R_cj1dʃ%P0~i \sUKB  22fȦ,wiJ`9h:n4mj*!jff[i!܀5ҟ5&UսT U fw D]gV$^5(,'FZ$Oܛ {kx{L.fQn]]8IӔ<+#kbkb݇#aɎ<\iq?nkBzd^O \ SD[ n~ކ`5wZ24[+CK租/ll6e,Ԥ"5 OBB2qBiQʨ"N, qf+⊣0@ *DLܩ)Ǿ- ŐXE"j3Q%1;xg a3w^3q;xgExV˼2s,"BeQ&O-ϗ q(\\o+cCjr~n *k1,㍚ߧ+t0Ė;[cj{8_!as{kUu}8ľE:&wn_=JW=,J IK?ؒÞy!y$ 葻 <Co g{cl[ I:ō.aP$0 T/} x{8))ԏGi$RiZ%7H'iv416/|= -Κ&=\8ϹS~{Q>y1-owEZI)11ܦۓu{Obsw? l8#JWw%- 9 yf 8AןOfd&:;03-c1:R{08c-6<@Ǟx&2c`Q9E eQ{Zk?sZ /; v^OkoDgi,]ez}ԩ}ۡn|}>uoY<۳=+ڳ"LL"}X&EȆ.Ά`TDP|JK#e"T_<)g[Nǐѕ=ѕS C@$N4)9QNTR*!XM`ML{jhPYl0U2b.ßXFӥ+r+L`aTVdddhO=ړ=Q@r~Kl(%,8E6o|F64$RZ/)oPK*qnWE˻]`2D8w<;]g]NRST<{VX6׽s͜ϋ\r^ÆnĐ^^j%.vUk?A-l>ё!UW,?Qɨ@BAI"Z["]t+*)TFc:JֽKШ?c/do!v|9b{{ڼ1TֲE-rd#KJ&)NHdNGxr9&JRgstaWXkGr 8A#p!ʲ`VP|Q"AK &Y"R czt({'珨,6wfz/ [[g:ָnv[nh[>mqs-3UMmk; OIUk{ia^k{wZ/! 
x:uY$fRMI5@5xZS<o%%P6W11nEdSɦ*Ͳ"٧X ~!dYݨMMU65Umjզ^!F3p;5fk j,/M_4K*E^kEa (MΎLclnbadPdA&u^BJmRk.ek)NZ}umsSe |ϳ|+ |^ٸs`+ele7Z 'ֲQ*H i?3$󒖾GVc~w^e㓊;{z6->ܐzOu_ع,69AOv=[UwO,8r1lyu|;^Yӱϳghz~ٯ3nNs=_K|lϿځW>XAm 9^Qbs }jGj{6r>GO5[u6⬒a53VʝA{yX5s谎4 i9 K?bwx!YKпL8)Wg&@^F5v(7ZNj:n6 O'?\=͏>p>/ǩS&cRgfiέg1qfwF4R7Fus4)̌40L1i-'&|Jb^OYvsz6̪^f.U}wҽ)`ƌ`Lsoo 'ܽk8 `^cn<?وn΢Oz[t~3ؽEvrimAJL[ ` 2qpv*m B/);[v_%}  }Pǝ}B>W˛( ;APd((]ΑNaDPRF)"Fl\}Knl+%+n|dD(ZIQ\JBJDE 2)HRtVA+; Z`4-78}k>ezEDty;, 647 )dTI^` Oyd8#%E{I$;y[xg|j#$[0'cfO\YU~YcP֛W0~r92G+a#zr8uv=4?{&\ߍ+JN'xSr?]-1{@/r/|_osRs붃 Mc5sݴʊݯG L`&_zzSjNSDrE8@ۓkAy^I:9{ 8|te߯sN [Ky0F{_4(nڞݴ2ܰ+Ks>-oZ+:J gRJlNH9m^\4\tDWOd8^b?LH'5b0A ]jVf bQIJNoI=3v>o߿S] W7'huk[iOk%4B54h7s-1o&ja+rۓ?׷#L4D sydzps hvJ*?[lm[^8DL!48MFo](0rA}Mt):wSf UuX06̢oIe%'>xt4; @r_8,s1n]&sji#RvjvhS ŃV[au !M w1mZKcx\[PhIjH@tk' q^:PVi7n](ԙW4x:Qosn8v3uu|7nKĝի׋6P2MoR>c bݭJM)Uu]N-6W$p|i8ϔFQbX|/bY6.0b"œ#@LSJ'cH*eaoMFAVdrx{| @"PB8%ׄ,.vR>r T_+`@]S.YPj%i9cVuVbݰ|$h4prLəXD<^B@H%D+$h䥓VprR [3鉱\}nӾ 15 >G U!Wϙ+w(qx$SMN OJziaIbN:`ue*XT "%y=1#(Kxq)gEVHF(y2Db1Qi ɂ%%Z84YAfN{/4)a3"qfĜh 񺖎Sݎ%>@L,{d ?VĴGW%K ,@nZk{R5J5nLxj;,}aFxl{dIndZ-ܕt5-{>dEUUH隢?V']cEcP<˨@2Wh! QE4m6qy>d6Gז_NFI֞+'4OEdi$0؟I0 " H$<Ԕ؆U҂YvE`=VMv5 \;;K`"K{?ş{b/lxNtWg8A8f"`Iđw,>e*gi`xdSѲé35Ő38~FIڪ:kS|d'4HfO+؏98~N4x6Ǫ(mhx |rMv|c֍~/ Ťxf?XϏ+Z( Z<e 4{f쇵/OWl(%dGVdgR) Z_7aB:3wVYX6q}s%i>Bݺpܝ ]ڭoN~닿DXOiKTuݛC(j(g=L>{sCF0 ֋cyakI2U+0o&l BII[(RI?D#+֟m^WhߟtFJJVHBMɐUStj IQca޺*4QYHjNAiݟTB]͹c3 dNɩtۍR ^ 2ĕngD1*dPrP5oP^"ibH`%X8)޹z~ Vϸԅ+:‘d(R JIE0#:|W>cNgD1v3 gT2=̖Q OzIe1=InB/<#8OZw}jZ5&Y3} -xǼx5稘Q xΥRufD!NX?_.*cJgHoߐK8v-VzFrܓkcX]7/z ̗QHIL5H A`׻ƨ8ZyTlAJ%k <)D00G!g5yi U r:wBc]R˕{O 4yna@Vd־"X3xR!qlx}^`)|62c Xn9w<^x>hA?5wOpe5m7^5'+p.x[Co^"uSr;;=DcL7G߷n=O|K.^o8|m28Z9W<]ZiPm~ײa<?'nKy&X 4|K@~፿pud^ҮD 'ŀTxMjpʎTx9E_fIy{#X)XHhSGF[w'cChJ!!])ZB{>gA%@+[?ZU{e CIXh>e_np)v441VLP~QZi*6{ⴘ?\}~YQ/~f?}pˆ1` M.G2^:4a) J"e*!BB ,% rt:~fv ?!Z~2bBE3uVdbL@B1~|$ K~VJJ*v513_l5) (N #$dxfjQVkLRK):*U32-E{Ye` s󺸭2|HuOR R)%. 
;_=խp=u99lRy+IɤЪ1WP NpQW%ͮFgcQwe QsKÿ-II;rof  0-H^F3_Ws\Aƚs7#lz@ B;MmM7U]kWU.V៍MFA[T8Sll-Q+XQ$})1bGT ~s!}1L@8oѳ?kPuԤOC}H@|f?lAzk ~ ʓ-ƒbvҊTFx%ΐЂܶ[֧|⹭*y+[0b f+z"ZplD"ia& Z I6D&8BLJQ.9n#r=6^ad lA "3B  8aF, -KFH^UAsě/AŃz ^dNk׹AEZc{bQ#+I)P/YcJ\(yQj-JWᰐ u^ v 7A=b::&?BxmrZ*մpH/ OhJ(!7HjbCb ͑Z!KxPgXb BD[,h΂p$ NhX*}V>bVIߟr f#`L3X1ؚ`_aI J%AalCմ]Lg6U ̋?8v6)݆^|D[UK++2B^ٚ=t,Rp!VΓ%ԚHLUd2Ҏ<=PרcuNN(]u:NEǪݝqAɖJ1OK]YVa-jcmxK:Ra=E}*#gk2vڬگ^SFs#=)g8+G׹@Lh4Y㢦w( h yY<ѤonϦ>w~ lU^iJk~xm3BDSr8* b3!Q1Q0C- :J L4uEؒd$ zae29v/h<@'^qY/[,F'1Zi,H A?cJn &Cb%@s[Ih@(}7 Bt*XAz0sK[Ăs E,7BZ$3)&q):!#Zlcmpm?+a",ʷ(=A =BOz:pLjtbޑwqSnؖpq?y7Q t|~X` vb~b7C`1(V7v?Eh|f 7@pWLf=3~zzG:ڀD Zcؙv]k]{9ke1 9)kevDƮ*/AZ)+`jfLnj^͔2߱R@uS)|v <ඝ aNs TNZ܇;Gg sE;m]^JqN@ʆ##7[R&1?"4gD3raCĂQ(}ߚzYz? n 8Bor_ VrtI d>W[~S c/v+:BF[MƧnR-Wd\%Du[pguH3~ ϑ]{sl-Bwɋkj>ݱT(~尬@3$j>e3%|n.HKw%| T*%5Q俌xaa.CN /W+,"BW.џ_=s V<^ɨDrI;ZCC3?ĆΌ#㧕XLs/3z|.Tfm-ܹdKNQynj7xVY#R>v^v`ގCM )@k_Ϟ.i2wW km1eHcf", ;&tZP]lxע2FowVԺK.D2Rw a*=QV &qIjNmv^3_(T{#V7 w"*umV`l5k̖E}ly)%G(9 Àm5BjTG,bWY0͜&-(Ma) :M?nvǵ[=lؐ< 416Nb 5TR)ExB4"!W%&|Ȑ@_"QV.8Cv7`ȹҧl@#0Qz;ŕ'_)!CkmcIKA{u gw_@E@TIi8=5mòtWUU=wCN0rvuCN r;șЅUpUroBq=FRR9-x D4e]0+a;5ݣ+#m-ɕhOT.8iET1J״3SjҚ (i#TQ 7%:XTx^*)Q/UL69%D*){;hA8%sD|G&FJIwڑ$SݴT @g,A[ }TdN%jX5 Ѷ-V=LNk2y_ܟe_׬4dꉺ !.YzHؤRHrW54޿\i9˩}u-&Bu)O˝~PFwM hRd±6?5G|Hξ+ReGz/1ɽg\Rz~ : 4nGĕ2M4&䷋Y~;YGM1Ԇ`J\s?[*ە>؇wk}yi Xɢi4ϳhD+b [Ċ3`M@F+|o'Y9D.xF ^@'ˀX`u(s*ڰj~<1jUK1wF@sɷ _1ĦMf㸔W؄&;8U0="# >[$M9eT -%Q!D"[j+M/85Rꂤ7 \~Ϗh˟ʊĂwmlngq.&6 ཿ*V?O,B`~= WoOEh}za'PC 1GP4)f$hJAV[W:f#N<\,kC B6nKD_r\]0TXTݿYԲɌyD-[N?Ҕ 6$iF d5kY-SizVZv}A)M4֚JtAS"%'b$ᔌqwrelA{ E$%m": E@uFقS[&zG;Q?~~~ڍz4Mzp&OHE[UtۣKׯ=Xs|Qd{)Ǔ $ubPHbBbo%|ҷu kj Cb=WRlۚU2U4DH.x ?(cs1sTVKa?B3DVԬ,Dʁ+_|0T좭zV' c_1]I`岢j1ޛ\V Ʉ/ xJzPrR/~otbk:9Sȕ_Mf& >8{ۧUL5j3-7-:{½Ņ]]p>NHk%kݝ#8j.¡`}:n lC2^ծT(sb|Z>MΥո#0SIJ6NEv/?4V &DEI]*Ϫbl^/r͟mG=) y{npswŔ+fYl87+RHHB~Slji͠NmW{ltW- (Uz#"Rucӯ8:v1[h!Tܼ6:XthelSoJ"1)C. Rq=4Jݞ".Z9~Fx@ڛM.N)jFwA} U6F Tg?Uo? 
Wpuû$Ip:5~; ~wz㭗סNPzj؂H>"YSƻv|m諣.(=o}eXVZgP3Jlرx%-ߧ%"2=jmf6(?2ױu,(s \W ʬހfCFsVf2Is^p`T/EAPBiXО[Ы+|.jճWE* t5\73կU䱯Os n3yZj.vtqz@+I)0g[bhZYhF~MQO_9;mхl=μ)qJx7^Q/C[taiAsȕ#JyY"7ʀuȜPxF 1CNn"HqE0\^ 7@{%݂Kݎ1yItˀl$UgHP :~R͠P{٣Q_ugDAA=\x1dP,Dw,O):[RH"HI'뭣ܢ\ZELÅ7UH!4GFJDF>~ÑH`W]qIO Q\2˯{֊{&4j(yOmAY%!9aPD:'Ah3Yi<8#=FrgpÕ=Bp7yz"ɿ3% 69,ϩϤeE,fZH)9#,8 L9<8 J8lD^4eWdk*Ǜq^c)ͶřbQ:?yD3ۥBb& hR&vux]hF-"h/6 $8]r$guS&Ziu% +=^)l1:]j [U' p604%qH &Vq%L`'Kpׁ:Ip`/)$x"IcLH¶eN wp7D2%uR]^m2 4sycI?,5߄%Jt74YmhFI[ *lÝ!"K+EbRDi?&+ln-ary Rz0 Q'YmNg뢣qS8Te vP(BE REƌo`iQgHQ N'g tNW (Ik$jP2b帅KFqEpjU33jк$DMt@Lނzַӧ#f8}עant;߮jrM p[:)qc5)@xG)'R?:%Iw碜lSN$ )JtĪו듼uwVS6,4Fth´ԊI2 Qq>bgb*˧G_dULG9Tͭd.}y61>G%%oɴགྷ31(87Br?$#Sbu@AkD9˹6@$X_޵#"%8mExe& , ;5`7ىw|Q,;3(qKjY$GF0XTwbw/RRrv8_|,PǨ9fN4 [=ƹCDv]ѯO޽z= Fe~Қ<<{x{H=@6>pOY%  fAљYOFOdz8!F=_/ӄvĞZo?XOyt*$2aߓy._;.\P`cؽNh]>0Twuv^a9N6mǨ=ME5 EZTjXnw3JtdԁMHVVT%:j9s;4iv( N;=V*!Z}{b:5;ߧN*Nz&L>4 7vl y@#ch`CɄ4yŠ,6ιw )o7'6f _uR0*{਷"c=˵3U}LueY%BOњOњT&Ib(19Zk(s(!8 Ў=e-ylU"yZ]_ԩuחWC[Ɂs>25ҁy!ϵ_UPܕh . QNfK\֕Y,VP& ln=%&m^\#3(Hl%->(p( cF|n3˝<=_$v*wEhzl0J ' T\>Jk&}=UH*V~DFCJwٵ{vD d' }:8 A%Z16S )~tGS4,R){6r-怚#<4Ifp6]|psCϊu)BngO.}(-m6ȝѦRW2/󌃔,%30c:\!n]myҳ*  YB֙PDF Ua 4֏7XyxHBm艩v2>?5{9:nlB|x-T'+s{GUZ<[ٕMu"Wp0 +u zXtm{P664V녒3 E*W<@skV%*pBgkI|cb3s jәSfr4YClfDf,X[k٥*Jykn! 
P趈?xRdsBJu ۹8ކV4(Gmg'u 4]bǃ6N !Hv+`V g$ꁷ#?C` ~-7%.g{aɎYqWƋPhNP(pA%O7KFW [Teu󳻗 OŘlOi!78q h furNŦôRQrpZj9 5e"rJB?jx_Y20lj'-샫H1Ez7M>DzXp;e"v1l -L=/ҐA5FTalJY^'&"`沷V,6=!@Nn=0~ߓ=XO~5YJw{-= K[3PyƼte6g"Ӫ,Raoîڱ ZClAYfv5{lP)7[q 2]c><λ,tGi?o I}W:d(Ch$D r ~gE:²qβE" B7;my&vyc)TPb[iqxqaߕ, ~ޢTf#pBh;ǓͲ R9V<@ϡDVIP9}dr $l{' h<%5:hZ%G;@nQc96^'V5ViY?iv{i̬'N(.K"R.}-`0Hd6Х~XaХ$jYA)тR0f roK <e +*F׹.xk%+蒑 k@1EmIh[뀚ZQ ZJ:aJ# 0FA>8^zgBF>/ }ʱ tí_pSI9k{(Bѯ ^[`(sy:FKDw#6;Ϯ,?F+U9OI@Kd́.{cyur+zTn 8)+30W˵)HoH&r'iAb hRҨҁ.XIkJo4&"d[&S,&@$C[*"AT&A,v%AH0&B>%/v9VY2Vp|(-Y7Y{9h+;'i2,󳋗5}ɊoN/b/q#>(]BBe&.2ڒD&9籘6z#t\Y$vۇ_ұxpqbMV!r97OmwIL^,UInE} ӮoVZ [Q<sօ "qqOň)_5:!˄h*wW9&Ppa vn }U{- e|mO"+djvמbwYjVx).L{ͮ{/lWrV&SPLBdC̄ 7鲕'?rEc_{~јĨ-xK,l<9/CU $~ս0K01b0zlQz]f]ۓ{3 k[FH;'ˠPiOf@7HjdrN|S^T ‹ǹ<84TI%` iON}kc6B'o! {4 :h6[G}m,KYkKӔFYm'2ϭ1o|aSr#:Y5hX%jT|ZOIa6,m` <A`,r4 1@rRF ^ g9-rڡ0|i .:\ZuxTb dZ^;2V:'mOl8VzQl}MR̪tNGnHp& RX PD8y͝WHKv#cr{]H4P37dG4Q4]9E﷚o&|R ژ$@@XPmGL*9a+}͘N(bd)0En=Y Ni t0a 봊s! IꔀB$@`:Xہ&fepIФ2b=4:.ȏ J@c+mFy dHQ-YۀR򀓥 θc4,fk^py(r 㲽:qg窬H"6I6H5LSAnJ 5$ Ky}ӊv-.SӇ|M ID-XIϏ?:T2^SOOPq[*R5nf_d9p/g[e~qI/}~JJI5h}|R}mVOl_ lV;UCKdW%ߏhkzԻGG nԻ~ wuIǣ_rZ=Dю1/, `i󄯌n?҉n-9rpkYCj Iҡӥ +t!3!HVu88 nJ4dnsJ7$= 2F046tf~A晀Rk%Br.cKxgW3)u!LJɑ!k鷟 q~՚tM' z8rAmysL3*3i77Zp ߧbR>ӆwhoW.Ihoy!Cn<@N$eһ~k MҢW}ԡ 㱰zo´@% 2 m/G{{(/c=}hQ r48G[}='c|iQ=6imz4l&}3:w#奵6($fE:L 1)3&"{4QOOu{;QRb;?$7pT{lw`}La:ކ}N}E)8Q`w۠ Mw_^oh:X[ |TK2=Z]-85b4zȉXu;C*===1ɈN1hڴ׹Rd9_W, EoSX&/?+%[t}/j;!;lŒ1f lEk,\;GdBjvZ Go_ѵ srFYeY)z֣k55|wJ_}fLqaI,`yAcs̸4?De(*].可F߬q}_esαhѳԙ&6m(htO9r Aht䛇H@rףUa-`. 
/i~>cn~oz1͐;nMY}rS*"Rnޕd2I6yи}EvzNc׉}v`]4?Ȭ]b~nzMݺq%=ao' u Y#R.1\/um\'1}o=VeB U#r̵o.0 m"r kI:ؼ-pC5F*=4 }9cat"l_ y8N8Njp<pa"T9I- vZ;Iat^.Ӊ95)c5CYܿ)H2jRF uk i0kV>s4 ~0X})d/͇o?WRƕlKDEr*'HvY#'6ł @W Rz;rLH1E{gDs9AQbYܥEZ=OYUta|cl~wX(Y.k~E܏y|8S|8UC昅3wJ?$c"iXX/ScܔemZm\si]+2BK 4/C(_t4:,j*]#aXѶ5@pERq2zHj3 8|_IԔwuU?ōQ'~>p;f$!=$g7 Jq|ds?/g~?{u9`w]hBbDQ;f伯=?'yќre9)?DXr hqOSC ۉZ<*ͮzxvs[_T%Ǩ4ۍ)}o3kߐYqzMmG\&Tuj*`m{`?(+7b5N\5N\j W57D3sKP3cFcw!lRfdQHAr#>XV I R.''eq_U:%u[L2ب=QJHőDG R!ϑR[>.}~L."Da*Wuy]~wr"BQ-v"c)MctL uThtχD:nY- •B1‚9$Vx^q@ $D4FIE_AFy-Njjͽ3Q-A xp="xD!Xm'uyZ H䶤=qݟ@p.PNӧ&yq?m` (!˻KO Yrtk'oGQ RT??lAϷ8o3h3f:~2^?>IM1En22v6,DXm2<䔋06vsp5y ; ]G\"4)[FiRVR0da_ Qą4@Pi oKf1m ' j4T8ڡ( U[N=c`ƌDSΜNUZ~"j)[0lQŢ6Ergw7n\C>qf7yZק Br)VJ@Fn-?zRrMg)Lgtug (q&=g̑׭FGU9+W.]zf?yoa6lx}u M/VR | s;+f/L@ʧ!ȘUWx1J;ŲB-$}w EͫHH2 [ V"2I{ʦ;4xw a [`b4B]R;EFr5 G)a0Hz9gQ> K6LS) "LjppMCH(ΑƭH20j#|/Ti@]+ES{FrP7b55uAk"փPCJ~ NjI%uRe%R+j~<%*1PEVG%Tv) a 讕|"i s$HWx7#!/nh ݶSn!wf{_TY x,~4];x!5U%=>+WN?)ݦ5N7EIsOVʼn%,vH%ݫM&l3N~|7!XDd[?OXH6sFא'ӟU`WJwlZv7ZN\tA5Ƣ[*R) kE'|;8= W/.>JcIaZ#I515nF#4VBq)VyS_,oLޅU%Iν7vbG &pTu]O d=}S')^OS?D=S1?7?yj(OjjQD+ӿMbh3(](}%^_A <;Rm{-+_6%w>6+ܓ!Tȟ7 ٌ`XFJѮ{:_#N{ß?@wٍ>"~y2.!X˭/Tsv7Kpn_VMĕfZ<k'ߋ7^#Ah +X׵&%1[Xo_tk-$8b0FK w@gY\XDqCiڻ eDh:N ªRMO:k%aE$>Ɛ\F0xb A)qD9$vz\wZAɵ ۗӶCZA ! (`'A`dNg u3PJ05×RZإro0^X D1hHYJRD#EHujT`(wGR>GN”ٕ+ S\uиm{~r,uZ~ɝQ9}ꈦ ռhq }5w `y5-L@`LÒ|W}VwD5qG\Hy q}k< LjAU_/sϓ)'3&y4BR"}gH.+Lmdڕus )%owGO.ΛK'eridv_ #x!t˦3|"TkU0EHa $j&tr=ΫgFOٻ޶W(ޞcJ{ȇ A.-&A.YRDىY([2)ԥe$E.ggwvf/3=m++H(6K… Gi#$]frpXp"KB %5{T@ XC: 0KVCk ~|S%2#IhS9eL}P=ad 䵢M=GCkU``4 *h .E+ iΕ k-VEB\Dc))>q1env2+gׂi+z ๣[8UƉG<})Ċ8* '9X*wj vs8:ckDF^h1pZH*O\x傯`ݥ\ܐ<]&ñ btHF[9v2yr4E3.S̓Ω?4dNhch|Y#6uvjrQZňtbN8B؅$G+?ډg@bzEˁ+Ms$q$ hO(KDTw $y=zqѠ^⒋t!xY!en0SߓɮL^CqNj $ O~a/ON.=]Ǔ`dA`A>:|''ǯG)~O -ٿO翽N|w `_]O.|wI':9OwC6Sة6":٩,fucuܑ<d26{ٷ1ƚIܿh߾? 
[binary data: gzip-compressed contents of kubelet.log.gz — not representable as text]
1<Ș1B!b@1N(Tp7;PZڋ%G 3M1"kF&PˑԀZݩX3 5jyƀoS%z1VΥhW0J1fhj">A4<3)Xȓddzn Ahq{g^N6+zd&19>ʟһ@\J41Vbp6 w57?~8=%B cS9x+ő*E4}mhM ]m6с S[333|'*ƳY6`=`;9lbӖw+p&.$S߭=ɾ:W}n(QQ>mI?sNDIdpD3Tj@x<yÊ$\2`!NqLU h0G%X@h(a$s(KEN+kCMU |gA\Mqc0B"!\;~:sq>HՎ͂BjAbk#(Hz0^;I HfMoCL~} qt/K-zOP"C3zSf''߻b 4L:`H^ *h iK1Ȍb4#c5Zԯ1e,5Ӈ2'ւ֥gBL $ثE2Z( 1C e`;C,V2.,404V(r){jkZͭNS?E{.Y*)4!to jPPq"gx|iQ6L@p/yMRkxt@0XMRw^)u5`W Trp:z0\{S+Y hz_9=xp>ՌK4|ͦ f?qbzf؍`R2IvIBlj1"$f3޸Zm߽i#GIv+Df4U[Auk0-@;?`ϣ1Z"P-p{͠3Dyw_=}ͿNjϷb73ÓřxxHDR}X76uk.ʗ/M@TKomünOw.ʛxJPiWȌPņoeιX%{D%z=0K{3QH뮀/8QZa維z0 zޤ!/햿ܰ=MdKEBەB ԵPTSŎXNDTk tL( w^,8zR-5z̶oX i_\3tr[{) x9uT%tin2ypfɍ)D&!qv~~{,>hiBo~M=x'eW>",Ga6{ FWbVEOukX;kIB"Hv-p^n:ukA4}Gv8+Tά[BLֆEL Ď\n]к5 Gtu;d9aY2[ъLee|@֫E}Q*8g~ۛLoSp`/,5닳ofz>SF!+3|we$Т"GQYLhi/\*f XtU8ҨJȘ(v:SxA8;3lcG0^$.V=Y52?("AUۑ"/Dj!ASXxªV3kC]1m6S˥`b~Ee4JێEꉎ=T\6T8VB"u G^Q^T3XUN+$5@i˼p7^3l#OXg9&ΎZ>^&Z=5)i YnlRDՁ"Z-ف+U1YH>[!ؗi(W1huP(r,D%sk1%$x o@ Xk ֔; K4,SJ(Q8łs&2G[G / ,Ps$ȼqH|=IߗՓL/cw8x)Qd [ IZ?7)@Mj[g!9Y=@ZՈh(r_B`3{9*#99[:Fl>Dk>4g j<|%1MoǨuZ{~l's9,-k{;ٍ1`3}ϧZQnuբN5.`oݏv5M^֗?]Zjf1^܇v5=^Sxt%E8@˝5]v:C]8~or? /˫`o1__}6,չ7/wW {gckݻ/`>4K} e?wӧ^{iup& _$d2Y"9".JY"3:Did |̆Kq;؛4Xq7l[x['&;F/ 9R=%WL^l;8i=:zi/`Fmv{"_l.<џ%ThQP@,sesӂmI yyߦRLvm@ 1O&;$=tn%̷ه\ Tu)+l)(*{㐅NrCȍʞ37dwaj.n<#o%&)3!I8`\CBX-\sB9w. 
I[]'rs[rC.L[hk&j#$nj@K ZS Tq-c JM`nG85.&|5c*qu5<7!"گ ơT!gA\5`8L|rh^[Fwٯ{W2 aW?vIᯛw-# ݀ g{=J`\{IYCGɍ.gš_V {ZT)6kߦd/dnm@גӗtf̤9K:UkõfeA lNU4h:Y9cР"fvSyHƋ hN6/3bX:@n?Ӎm5Lxv&hn4p>0nzwY1nc k=YΓ3uIf&s [v$v}D[IZ9=KO\OXef<N2oI(3KgCYzv4g,l chL ^}pI£1{tΜvy޹v& } IR2xwk_!:pb<d,G0ԋ$GVP|:/)30p(㌜='/ .oQg{ g3|^Ue:y `# `kh7t'ܩ7vrzFﶔ=;xE_L'Hnkq;r\|ػm)zTgwnzF"̻-Аo\Et1_yzk=+<9U,XT#Q ]oґ΂]UonJ)&|꼕E(Y7tNmĮ%6hf.zSGOÍ=^.F Xq7"j1tF>еEcg <푶亲lMZOv_Wa0{ㇷQY CVIoKݏjb=_9jhJj},ĚEu>!G-^W[8-Nr-x.\)^k:_/ "}oQ^Hz܍ՠ{Y#&lxSG{4A}Si}[o>,O6^s5-0tDg[81 O"7laHkQN(O~N_؂'to>mKDCR~r=tfĺb:ŵd&Vqsk+W%ar7xnp9נa5q ɹ.NѤEb!L2hs\lހAWK3j)8'!TcXj&(0DgKnLX+:Y LJ] }RƢq_cOqL!Ҍʁлvmi@.Q /ƁS[c6YϷchԑX߆=_xLO\"'k{YꊏݷrZS싨Y=+|S\ʙ@'!{w%4UzΓX"a-уb|'WY0$,{NG@zԓGe9]> X!~g˦aik/iS:9}Eĉa{$9 x6j#9'~C>Ͳcd^G}Z__9V:t|tj nWwrOOʋPn&3y/A.<йKoPk?Ya:X;5ث{;g).arPnwkhs6 o5WbC\CXp|~0 x R95-pH1џ )Pg\i,t|T 1 P0EFklN֪ gFQD>䬢6\c}VT=(p hl9o6ŚMlj5AǾsGrJ!v 5uVQiƼd|B7#e7CICX]oovX1x o~(ȿM& *{NZ3r6tV7Y8S@1j<%u%!_Ms O6pGu/)k6P0 ֊9O| +O~wH|aPoq,Qx=:d1\GZv!4h{2u! lEgsԂOn&B͵! ڔ&ӆ41?3 g]V#0^t i80 @؞xIܓ\9>Էۨ ^IN~NBܩY'=z:ٚ?G[ lU>}xß;T11p6K;jHHx4 Y,o/\ Lձ.?x amƉxȅtQ40-ȹHl=+@#Ia.NR@%'PC'԰>_=mf|u@#`(6WM=O#pIS]x+K%# _yȢBc*ts@Ƭp"t ĵ]p€ALf*Eߴ'+C^x0pΏuc}Hbd Cu?=+B~}^A)E/ EjJ?G| ;3IܴskZ&/:D!FGyj3p'G|_|1jص:cRJB\sŸ9YaA/&զ@%>CKcFi>ĩG&C1P<^O,ՖUZxƫTuy>٢[U4fbAZHee7kG >P*eMՌI[fLjvZ]!S.6 C PGֶ`u''uyNwTȁ HRL)ha9 SW4 PzC}&ݭ?{Wƭ /3f,T,Tjr˪ EfBQ %sФ$lK?9&xjDf0 8W㧩1` K|j{hNԆG8%J1 I"JRƣasMB)d,Ϛ m-%F΢h cx#jzJ_h1 )Qq`` ˧%rT`to Jt* X B4%wֻYt 3:ƙ-,!cbD%[F1ʈ8z1G0rbʕ-o932Ӯ!%0s%AMBfuZZhr_G# !^?2.Pal-KNHDVr糨Uuu"ud=,j!ر[6 uR9-Q'jU5|4QQ,dbwggK!wGUa\@n4, =ԗ=+E-ݎ/mߪFs<*I/5 'bMb{Ґ&/&ȑsP_ O/u$YbJèeU?U=9(YoupsĖj4C{ŐIf)s0+狳wKv3xPyUܶ?A7'0H20x74e]ϼWb~Z-AT1B@ 'i0ϴ)|Cd5 rJ:upnj~lz Y47W).Cj9/FXdTZ`sY~G5ي#ub~P;o<#.B;:cKϷإڐR0i}>pnG4'&'$]ZARCyEz>^ ?Ͽ<0aCN9g,yl6ӏ'~r ?)6! 
I*rUFH6Q~QRFb='##yddR'^}'x$FW+v063r{yTHQR+R0 *x!OeßN6$ZII:hkjΊͤ!m4AqkQ墈)A)+ԢQ(%Kg>VYo:` )U(N.5aP!>R0W:TwCRC;Q]XLB$8k=DT9Y\wrII#aMSOxܔVE?~xw80sv~y2hVCO5QOf׷(ɂ\OQhʑqz9qK<ݺ>j;tU`HٳߧtxC:jš0I2~c+36pEgᮋ_ڿlCĮE ϵP|?wQcX?RYVJq y9޹8 `fÝ($gfkjȽ3 E^{'3?f bo2Bk Ka^^e ̂U"jd_3g$6^t‹4*}>Em@|g"Y . (%v  GůiF `KʓtNnRBlb 6;h͙a0$e{5X$DS!E_h?BɌg臅m3FX5ydÓ a%BaPgyrx&l 0>Eim; 2jt8BnXLxaۤO[A P\Ԕ4\SP6-q-)@7mXWZ+AbGMJ1ZБJԜc4h$YԖ<ec~s;N6OI,LeJ:ҥTHRNB.!H >u Uzut-p=+Ԡk+J̫۠;K}/߽Vfxi xLPRba1>lWXG`}@,g( ֓Ri$oMw,׃92 jNаs+ K–TZkXnrFXddpشgMr糨AcY}G^"O(@Ǜ3J~ڻ/Y_zƺ/Ǘi5i:C7!cdQE)I,(X+ڧ挰,LLVbSpE!,Uo;b&X(=hV.ܻd:SLLgJVB J+޶!J/3JsgTAȔ,Q%b6oZTx˜YQkG TB'&ZaD()5W =8.Jj*2۝:3#=(yFF<#S{䝕:(.2+#!2@(+ 0\(kؠC"S:p@6\t\`UFDT %%3gq5@0Gjjs OnHkNx:N'͒k9 hV %C2}*Bt3pZ ldj=ڞ=y`C wGolBGKBу=߂\I Z+=OmZ3NPiu~R4i5)l\@Xw}fJT2XkT֫mϿJ3A7i@nnv6JE+l);DV;iO"0V`D 1uyBHn40z--"`0CIX~3s5'/h㇞8Td1]?(K]h1+rFhbh ON UpbBD FP]ZM\˺ڳ4CcA  /pQRHZXXؑ  _0R]m 1w1?c&ψyYosf<t H7q|d,| &a,!!,G}>ZɎ; {a`!#K(0q O*Ԣ^(Y鹎dTEş֩1 tH& ː"PuvzWz 5t?yjKԣ[P]T&;\^Ig(/> W>+M%KP^%ԠcXj+H,| /Yj ͒0EQLid'oups8Ęjڇc0j:8'Brf;rQL_M܊?A7ާipWލ&vS X +M•6!О2-JV/P1MKRbN-`̙-wl2}9,ZoQEԽaΜBo!*!V3\cw0qh6&+G딡5LBkGJ~lG'J7D+_u-2\.*o.Cf3f G)ګ'&PMTAhF^j6Z+[[e9[տlkRk o/n 9AXE4!QV;N 'V]PB)Jp9ߊJ]?fl y~uZO7} >D;nRUg303e9t$}j #5+'JEdv2Po֌1U2Х҄62tD/X-3+d'3 ÛA>^9T"I/YUgAhuD&/vgC?̦Gȷ_v"VG)rbOhKNq1z^{D[ק2+[N A[=;}j;H$}1$aѺRqcOA LOvQL`v|tJ9 BaF H 2=S{ ,jQ/x=Wiw =s  v7e2XdAIxx~[c=fȧv %}\b D[VO5PN'x:с3y@p~Oab\x<N30Fh_֓#~pVJ [Wydtz1Џ2H~bm 2]E#,zu9- D!bTsxҬjboUt޳6n,+¼Hӗ  6<4ɉ388դlSd7/yd4U]U]K lȍ}Adɔ ]<*6>!'>tڒ[`=2#'1b1m0YLn b,W@9+8pgYkLי<'g# &犛it  " .@FWj5_1.K:(A͖,Sʱ%ohIdzB 0 {p^ ;SF{|rg(gB! YNdJ:)Be\דiOr56whQZD&2k@.X03 %A(WRPv;hS j>pƍ-wɎ >Zƹ8JشZ `m Ps_}"]3K{@m$7G DtDqPfqW'Nrf8 X3_B-~D1=U|vQ\],62V JsG4dz;dHG[8썸wEq}<$<\GdݮG'(Al+>@)+T9IeP;z5)Mݗ>ӿx ]vOmB fQA;a)6sQz"H.lwm! 
P66yH,A9@mi\xD 6*uvjϭXIpVN 8 9+Z}~p%)wm%Zr":فCxK-ai,8B0 Bt(/a?:c>v MJ3-8D zB]vB蠖%Mlc6CTE)oV*je0f*MPkF~V4Ip$:%hlY]9{v-;ZZn ]0+)U( E:KGPh+rXAc T՜aKdCfdn\2' 0SsV  tE᭓ݡԼkf\(BX8f( ٟ\X \H)ׄ8QBPǔ=A}ڄ>َ-{xbH6>b֠1뾒:|qmԶǡ_'Ie|`,ErN;,2E,74P*ьݪ9z&$gbJ'r^Y{Sqq1:h~]}^Gwc ^<"oSKG:vP*8Y|ɦU<|7k۷\?1ښU!߄8sOO|3&/nc%D,*|yɺ1y'T;QijU:g.m4|Y Ed70٦Oץd$DsP6VsT&ZRc ,8zkJYqV̒ٝtnd)YIhE)X>ijsmW*v epXI{Y<VAV6ʆ"wPXnzX MDلa&%dZ|F"5ԨٴhmZD27ٴԴhK +ݝ HK9]JMtWEFu?@2[8tV*,rtYfqH@r!q~BI߃0{YSUp%"r~mF?6BCO&97sl ,W5haiaV,nx3 9m5%a򥦺3F!-OqrL8k3b0N -c,RzƱDb\)iJiUW&g!Kj%ΣsBLz&d;뙳F,n8*&YܤqV u9kt62 g=ijf ,4΂IDigqV 5"ٙ⬴܊>uV}, 1YP"]Ƞ18VqMd<CjֵAM"..89 y( yf@(xr%(4%89`oCm]Wyf!3:,utSQgeE(ǥJur/yPt&ԉRrF$Ӱ0^*VaG@b4>:ծ6z>Zv-kzڄZoび@lrX'Dad+;"7Oc%K5R8,|pg//ʝvG:Ԗ$?7ί"ImT$~$Qb2$yHۋ((CB0Q+&n)#vP0'#3ɪ]grTt?jv3u;m Z~/lܹ~ͦsjg}x}fK0L/A84B', ->}X Orq$2ؿ' y@uXDد(MnvwKC+ف(0kˇ̬>ZSCyq*հ"Jż10goKȁ0QȄtM̊=Cn+ֶrIFV'{1T3ZrǵV c0Cq-`s v_Kkh'#b&ԽB͘~(|‚|l! g[IX1Q4u4y()U*ob4,rFC׌28g΁9df'QYXK*2(`'bb,z|Dkt9hsHDX zRc7[^rd2d" "7Ub;r#O15~ij>ELx/hNSI3?ة8},O!HW"V]\\`B-XM޿3ooސ\]oF4xxw!Z%8֐snȪ#9ϔƨ'+R^`ɯ ^ϵuDM b^v$,nԛqZ/ w;gQ5 =G^1V8Hδ5K/9≮*/]U:#ζG ᣶.*?{=Ԏ!'f%Qn;wOr0::Ds%JXr>Z%]{Jookh =qՊ9b#bkbC{PR׏¡CK-QXR-ʇ)Dt^m_\p /Ut}qv7qz2iL5(r-ȩqvgzńGKQ}PKP{ѬCSD_bFr ërwݙDa⇻u~S׹(6M4*=ozp~|ω|Yٝѓ=%9R,rxXH֋khl>7:#Qĵ/$fl4SȫJ#WqV6ฉh4#1_:ۦ˙-)7KDCvwvSq &_p`f]EfmB b?DK7D>kONN]DyS:s5*6r.\ZV)03xN챼쥌uH]ΎdLLLjΜ5eΊY[pV}ݚ5QWXwSt |Қ67Pw(wkq9&I@0 IԧMA#%ɫ$֓ҭǗ2G$&Jњ9JIw7*}im#ٿ""1x#/Mֆ`$^4Ҭ;vI!lK꺻JyRJ基ȏpRI"d쿇:h,NQ5s F8SN2tuUdSh8F.ŷǤ<魁$cRlfcM_K{l5y}cP"{}co ]9jo?]b18I!]P$Z'c97r3 I(fmv$j+%ۣyԳ,gYPP0ɲ#(D(RPta}Rz Utw~*6;)MWggOtCK pc%x.mb=6I~]A>N7FLLZ/Q$H3(?Ex f9c\'?Kj4gVf&Vo~tsp `-V $lt eq=^Z^8Bֽ/87*D3:CN:\"HaKV\BΟ0RW# ʏFbtDUsl`C(*X[ u@.NT}H w<Ϳ#Ipgh2;U cFAd@9 Tf!PVq8FFxh3kdT\(9;L6?llV,E< Z_ˁ=CM5,40 8&A -ZZ[$E"C-F#3q! 
B I' 4*c3b9< a 2ְӈ3RbI&86\ĘIX +cGFF9DHr}n=Pzb ^|wrЗA}}w/ZUY¯ݣ.ɐ3"Šlz's0M=%onMA]޻f5E^(~+U P_n.$@&f|Tz{#`r^9=_] Gxz}z4ڢdJ3n1Pd%dbnMo:TMe*Zږ]D3Whh@(gǒ z[f~52ݟЮI$&^vͭ S" 5f=5UW~Nξz{5Y}O^f;+PKb1 D:X 8E7XH_NceƩ)Vs ֚)iQuT8dY"^ze?6N|*#H2 "3F E4n+Q[<-세9Wr)spz8H 9.LPc-9rDy Rx)[^~}RKW.+m(1D\7[W\-z|v[G{. :NoC-`:]=[pEj]tL&O&UU|_?_6䋪H!w pfv|&b 4U36]Gn^u[~ Z\^ӟw/IbKTgu!hRE5qw΢UxJayM#$Dw[nUq:UQDh}imLU/nupw΢xJ[MNnUq:UQDE(DTcҭ$OV|,)ϳ$)6b: t±5{VSx]sQf쳙?gW?s!soTݨO'PS幪nT,>vU-lN͓Ԭp⚹{6NHĹԳ9?OwlΟ@1g-ݰ/MzsQxri22]J0ԣ<\r?8 '3םqNO,pӖXyri 'Ygj^YD'9{QjF;mGēt (jϽ/[eΓޗmrFJ9լY qY>m=#+\ ]N#=Yf޿WdaHGyH<}ICYn /aI2uam䡁Ih$)]dߴME2&QjY!7mmpeR~+wT|X9D|T3۾MـDXvj8 qg?3o|^?*D(XRzgO˭%4v@J/lu\r3}dצ~@G5SBn&+3 Ki$5 Rȓ!G s,ig WCBy-|)IM"LxCFa֑)xX KLJFѡcEQ=Nn4典MUF_H!pg8MJ""ƃ8&&40pXRKTܬ( j (CWQ'rv{PKΕWj{dr=$2"8 MĔAE)ƀ>,BF0Zcؘ톏] s}ΓĘTpdk8uYrJ  B\P4Xw-k "3+¸٨|ޓ:TS Lɿt6m2 8ZNYgA+76e=)aq-+,GJBq}Z+P=~'UB7YdȁZ Xp#%A2rϾ eVZ,k*9w܄YL x׎g_;ξr|<`HcC,9:V14V5Yx B[ZSx $&ZE7 ]._4?oE޺8C,aڱֽl?8,NfX=b"6aH*,#$Pic "X+lj Ʊb&Vp5 *mhp€D1&JZ[`a1 1\WLsFrceDhRq[uW-ee+'Hekw@.2H wUj)o~yyK2`BD]]x nX$x>ɥW{ps* 2Ȟ ޻ A+j565$B\HCM(ZKMbb<\\0ƅb;S*-V{5_Sx[ؾB}VBqZC7y""oYO#Cl*!fb, @'5Cϵ X:k)4}$= ٸ[Wr<[u M溇_|8Bx9|kje?V#s6^vͭ Slo5=~_~NϿ~{5Y}OkVG/}?{3YU(_cS4$]m-v=\J:Aq2!rnR+E5ڙk5ٴClڛyp`qCxu _'C5(+#Q;22X byb )23R4b:% V̐=Np ',cP@;Ihoj9E Hm'X`Zdl/bPѯ/7>DWqJ1ŀ Y>}oܧ[\k(NcQ-^eg^WOLVl(Px:cbX; %\ 7Й> T&䪓ԗQMrIw%Lzm2=MJ# ݟL73! 
kw^iadE 'z|tO7;:z !-pGʞ3I"R5H@EG -L@"mQh4-PK0!1z8T֢qꪯ >8b5)l|%'kŋے2ZCAlXg73eJLeDuvy!+# G9)Y"MQ}񏂍 +E9n^!ZTX"[jȝ]5_MCxKc}.jqRW/6s^3.Y\Qǃųg+UJyIʣ e7djY 9K" .#}&c*G{B]́"HcR`q'u-޸ efRWhn+xQnD9EJey`ab$?m]ubW?TYHEd"^RVKһ~0j*)+@Q,9ըrP#ZXP1$0"kkT9ZnH蜹vl;w_<@2H^ٴ}4l\JH$w}1bONU(@R:2ݫ@->H w_W|M͔會9I\*7 L,VJʞ qV*M*"ٱs]UN<ebNlZ=_87խwվ߆i(g~l"9Ej\K{+Ce|_䳫?_65;Oj\G3Hz4ň=~ %9{^ӟwa0*[y.3Lj5qw΢x]閵ZnUq:UQDgwMUҭz'zOV|,ZŹ#jQUŁT}G֦Icf=[6On_|\/qa'8.L<늉tx ƉBy #' OD!lDORq'A=C;W67\gŐk/5+rGd[+w=+H-4J؊Xi󐄒11 c \y[fmW ѦyQjY4.dgGhNXre"(kb& yY(ؤ68Z[6 e&d9hHؚ#1&D5DIb E"kCe:8v<Ԕ1!J @D6\Rl Gp`.KY5VJ9Sp'^,p21d/ t3vq"XA 1 4Ch5Ln<<wmy9{v4}ꋁyu'AN60&9ol&@vSMђ\,,`w{ٔ2p8x{CM;cqԢeݏD#'  "5Z͈FM *(*"1J$Kxp <)0I]!DH((aF㴯q('Xk:YF}J:| 98ŶQ`̕qQ0¥ hƅd`2Y4HV.o~[$iL~mXon"|hf. 6+)bcA!8$,5H9PC@G1V :p8h1Q;cYɬR> w6 ]3S3?D5z3תxWj*@ VaD f6Z AүB¯.fJ '}I& R;*.mFTZحgӯ2ZE( a38?]/n$$brz+BK?gjp]~*VW~3LLY<?Uq[m,>o3!QZ?O\0PYc~)VS^3)Ei >{ h@ltWY2)`2PD=RW2yyE%J_o 5 (Y1( .p.Hz m7n.w*4u:-{w-RV 9ig4,ȿ'v[#X`iLEކb0c;5S[lCE藹'F5c[-]W" S6E󭨭u`(lvՕwHT0<|<ҡM3&*/뷭Γfhjxxݺ}5䴕jӔO_J;jֲR6d6^~s+EҪE5Ϳʰז:bYg6Ƨ/ٰOYt]#=*lY>B~Y:ih!?g3o,UNaQ"TctTT(p{;kw;k8kDka۵6uhj66}@2- #޷(+g%Qnr0z2M?֗Z-NyYj];(9~Gޱ%ncm{ǶM0% -=n n u@NfMnK/ޭ|m).# Q{nMu@N"vݚ_#z6C6B>!qUw = UZ@y "ɋAN*8ify9{ؔ<8 n߼-Z\o6_&lV{l{kpHms9;´:,,f8zZ+6\JF̕}#רj ku|Fu5Zf/0r&rv?q>7fu =8ǧ(n| N8 H Wc}~V?ۧuMm-?~M 'E eȑn K켕[c%[-L"Z iU2UU`$k£=H|ޣjQu{;"cOFLL& -c<2.2.1Y= -Qj; cz8xv!?ܫ$iM(e^ K2HM!$Os6J4N2D@k Ji96mDYFVF%.j(QQ( ;sB%P_&9_$lCs[=Kvwj0y Hk,Da&nHs[Q@("q2r&G!DpcDELZn5(J!"F3L$f?ṙ t?RSLPU`}+fxWPEjdN_Mz^Z VK:DbjU"` lE^!z (^MWF>'PYR+^8 a43Y(iʥ6d*';/iY CE۩R* 7PD4dR#+B YiȒy'3 %\jr{@q5y%ƖK[.UT2/Î-mΐlLAAVڎvs9C@aEaB+qSdi"E 4qdΊUF"" TṼDW \)'\jEW*,m %\j帯=$qIZmL"]EɊI56/>"] ЛWl^ܼa)5/j3kn^{Z"͌+8e~ Vv9 ‹/^_(8(4nor;_7nUUFfv(Q[p"R=HnbXٵ d>mm(3&25ZGiB6iʥbuRhe`kg Y9o# z^ڣ׌_%';J(ac׽ٛ7%JR#f^Vh lml ?핕v&U;A<3b.̲L?-V/ӻt[(fs)]䛘Dp v7/ExvX(2G,Zx8֍U\ 蒭[} Jc Հ1|L8e|̲p7W>&ݰrqs==ug?E%DN˛Kw+xh㧼dC璏9DUGͧ/ާK!I[-nޤW/d|#}csSy2qNC3 hp;׼}yf77vo"æLzl{_Ylf 2MFΥww[e/vywL;^~\4weFֶ=G6(k_5 B`hY+|E9-f>Qx; 
Iq^=8vFT06AND?2QE߄W4ӻ[p{'<'p^|kN޿{uBeyP:;xBoOneu P,_}TOQ(·klQNdA7%=V Ǔgm?mӏnлoRv \!4:\C&2J&]D6ѭH}ײdA=DMn[t F{}=P:8b D?r]2[z;FX&fotn1!s)×Dmlۨ):9@4S칐&Vwi^9vl?׫tY. |=:jn &?յK7y}RϛpƑ>Ҧt(D6A9E.E /8uqS @'10⹬XcmE뀌+RyZq7hYN|8FFN'fQҹr4Hf&}jν(WԶaCWaqX_OD5n͡VHÚT9͡6Z3}}xg[Uhְw]s5S UQY| :+լlln'n͡MbY-h:]MX}o$u1/ֳeJ4|q zR)k ]ɓEz&j0Ri*af#d{ :7FZ4y@ 6P5)]HYg[΢^<մizG?6]tBJcJ7T\Gp⩒4TsG|.F֧ahgutkC^8Vx9 v?~,֋8i, -&g{8Qwbv]lW N׎ys!}ijnSᑎҼ9Ȫ}ij؜Zj9Dh Yi9\}XU}q&Tn&O5{{m'.Y2剪L/Di]pjsR5M^tN+|F,U8~їcZ#W|`т*KXDv-95?C~2]yR\RB2q8YR|gՌri?00 /xGu?GmNp*,pIҰΈfC"ul9ǩ:WjRmhqH!F>/o\ITIȉ& J'IC`q $:3IPL3,; 9@'#N{Ɉ7*(NU sR+R3bp(Ε)iLal*t^zQF<"(Հk$HC3C*W2q8M̓RFC!4c@`3[K:e?,j(v견ur:"8#sR Q$%DĨDqJ'T˄0_ vUVUyp+ρXqZ؆>M[݋V-xt 5jT@Mr!L˝4 6`s!#:Cl|4 ωݫ~&u5N$Dj8& P#ԶNmȥDj8)cLRNY`P("D64LP##L dA}=ILaSPMpn1A vXi+e$Z}SQ=?j骐f6Á"9P?٪e{[᎘ ~rUQ 4^=9Z(X*cDԨIhlTopsJmOmZ a2TFayodC=݁ 5fw^RL)C;%cAa]G2Engם(hԳav:}K9 bh)߾׳Bf< ݟYy#j p,QNN 9^09O;$55 j|T)~rn)61i`ts9w.5Pw^ԉklCSІ _[a$E43"Z4̀"j˽݀$ݵǫ[ԦeڷԡV^ح-Y޹}K1s*-5(j,Dzjѩh&r}yUIh?ݝ#5ᎁ naF>[[5!TJUq⵿y(|_Z#!h~lI-*RJ uoMw/9MMmIubޯC A^Qk˄28-[ i~jl/B;P-X.}רS?`ijxg5ǚmp gQutCjݨ|q:kԑn=]HӒxlQ!/E}xŹ+ Cn8pa5H>ML>qnmp g O9&Hsۯrp_ w|.¬R˹OA=B-ykE5w|N@\AZsN@9ԜT1rA! DhXC'&!QJ$TKI&CY K"Pa̓0Tw)g)W*7 &<2&$2&mb 1 2$n'THցZ E%*" U3* [d$\XKbRHiơ :oAM/SFCcOC=B-8oFNYRѸatv\M.Dю؎qV?8+Zn hpRLAȿ@F׃s+MFdg8+ZR"GwϭsƑK-ԂJ1ʿ'nʼn, l)Ub'-s{_a?PKG׽̍-ʽMqV5FwO8z)"RA?P Iڏ]?$NՎOC=B-n'ƑK' BۧVF3n7PrN jF8jibt|^5U#gČs,Ҡ s9Ԃ1۽cڑJ2GV!ܰIU4Q !W,Q̄ќЄ$!I3MȌ6 $Ċ>`v )B&drqZȰKdB(DiUꞳ#g"`E9+Z#guYnql'08-1nPc8˭)ˋ΀ 8u,^<P%az%a(?9Ԏ9$?zijtYsP7Z<7,nCU9q䬮8˭5gO?Qttt|Ꭓ qLUPtKoy]O;>*UM|xOE4NY@6K0~L.h=Yx?ަ 0V2yt5 o>o =ma$<"IeJ"iDh YtxJj#2U{(#U;)*V:itāR˚սR~pmLHbo_MŚM ݪhErBLK%!s4m!Gpmw7U.ԅ] ulRʵDIj'4O rH,DŽ4L$axi]0z~NVwBݳO}_J}(2afJDdI 5) $3,ՌX$VYFDpnc|7BtH#tj@<'J$@S(Q`b EER4K4hP;}: G5/eY$XdWsl}qhS#TTQ)lFOua7+~)G_?tB>} 뇿3 )^-OnnmM޿{ueyz6T0#ߞ^}3})(l2 WorRB|RM\q.}YL!t,0p~>{'Î UC\;^QHwܱ!j:6^ȽT]jS xt:Q fo#ahGyRb:k=m>( z9 g_Zق-h@(S Z pmR-k HBQe8=p 8(*_9Z53{j{p -t4SMb˶I^^\x[{>ۼ }RZjpuqmz56٘fovQ+ h6pm4Z~(h(05< \=Z҄:d. 
׋?K-!1zJŚoҸ Wm0먨 &0W./L֟nv|I1?o_GPʏM6A9E*E* d`篢Ҳ:s*VDaDf2?nИсp!7Bz5aӉr4$pZg{^PKƽڎDa)tv܈!/r>BMY9$#M hzV+UwpBn=y_z ` *yU5ZRaoZZQ4s59 $[.n'FKhR1wa ZMW-V[=6}Nj,Aje͘R$)_|8F~<>|( '>`d*Ҵ=C+#fXvN Bn?/_SJW%R8䅳O-t:;Ua.F֟eieg?Q!/E}x 8Ź+ݤ]tӋ0vgBqnmp g O9u \A9%:] |wq2mO`MZ-(Uzg5)-nvH]S&u}&T-D὜6K5սCAi1=2NĠAGPa?h*SxWxȦ;mlgCE:0eMcE/oθe_4|эS0! tX hdX\Q16q#Y/I$RiYOGc;d?ՔlSd5E{Ll.Jƹ" %)0j'\,|Z{CUCK)Ԃkc=6I+`V@l֞uAW.I2fMO/Gu'6dՓq4o݉:F[֍vPsB]G®;m[*'uPWhQ5z cb˪%GOU kGurW8[9;.fL OVh"Żk\"T"IB:qe8L;MlL9ys@MvТL }]vg|,c鯮/twug2\h6m8#-1j(m{F"H\ҫB3K̝CV8nV,RhĩD,q&O転H9L! icS;5F~X_AL6P6-@X`fs7cՃPgFYI"mDu4éܢYJ(ՅoBJc/&/{xS'w bLV(96l+@C`d[de9ynB@gmfaEA˲\)Y u=.!穾S-:HFP~aLz^~:⹠"aZrg,ŤЏJsS&aR6; ,kL$(@'SfRa<$Er+,υ5fxX2_/*yJ7зdSs6q0D%vkXM6Pv& pkC};ڔ/ɪۈq(hQ 8uCLёԺMP0B.ɹ6'6N,12A(0Lղ^%3*VgE::*J:H ˬsS^hS䦦 4A޸ ;T+ƤOo"Ћ oQ @0a*c\d2UYÐ%W-x1? CbQ:Fba*V2ewDV?2NsJ@T_֩&'|@Y~ ! Y%FHY"Yb2ՙ̾&j1Tp7Y));;{^$++XztHD-3Z"$o~}7ֱpDD?De{x0jQK6 ?hYZOU y(rLt=&bIaZks9n$W 萮נfp"`Ơ T#Oց3!,r&1@aհ9!tn,3\F#Φ*ڂ%EYRmu4[/DBxw(1EVIȥ 3 JQ$He`PfIƤC*Ls3Fye89_) RX #Zoԛd rwWOsW$rnjjd@2L'SJ ߪ%.%+%:-f9'sdH`Ls&II" hd(hnю<ę3:Wt\Gp’yθA0$I2 mV&GQr!p֥"G}W FOO-cJW]eJbJ>R\{*WVnoWoLLǕo|oZ-p_V^ \_O>_|# 37Am򻵢z}vf1P[3͚?__ӻv$hK(Et}?|>wfƘ.)-ئwX,؈x φR͘0NP ΢ :PٷipUV^9_KkGG@ 5pW' reZŞjhۼQm(Zss̎`%LAģ GF=f*ig$&z>[.aJ]-P.Cs3Op{5@+p1hmUgOOǝ@qgAwF=ЈfҢ1 ku{n2:ZH?_bY{w) G P#NŹj2E)׻tTh|`ј#܆#@+nPא{yCѢ2tp'3ѶCs{ 7d;.G'䋢G2h;IG]rYwyթvbqy[rϖmfuf]97wwvs?WS~+jrby{֮7y ?|7+D.r0TóK/aUt ??vk#eEَC4 S(okM1^Mp8vAtbD2؛vOr n] +h dsHvJwAtbDe7_ݺ@W%#pkU}=]s;Y~*CG'amtqv{8_OyPx|!pPv" 0LTjpw7'̾aA9V_=fYS}Yڰ6 Yiq6 Y%0; C^FLYR!Zs1f_Y&YfJq"`DL 8Hma7ƈAҰakQJ!Ka Kaj, HLY4 ̫)H#0YpYm!L@:F#}UW҃@VX_5YR!Kr@sT_֩VZ 虬b42\xtݿfLRܿfqh(.M$;T7q&ִQ8DnUPԳT$;#_J,lbgڧrɟn@xFCpPsiz_jfIA?HѶ; _<խߒ21!F/?}x[%ᾬߩ) _O>_|#32d-[gާf<>-fo $;Y3kz_خE7[3˅t/5JmWU,6U~k؅4Yjj ~fb+f9֮=g3n̾ē/p!c ~T5FG[mFq=ГpnGDžJ3ue<-҂-\Pp xuH!eˊhѷmjѱT )˓T|S-C˃%h2_t[:] +h 8Ӄ$ A :hNX½M_ j.C4S=mbyo4n$*_:91A3{w{yu*-kۂw_!!!0L%7či5NTj@/ {2F?/?D W3oe|\U%>r;Y~*A~ptԨ[|~?}Y>kxDA R`VsJ*x`;_Me20EUNeR V(lJ-Gr6 %JCQ6"k[^Pv ]ZqaFlF7@`jV3@:*7k6 
Z@a,XB%(h'm""G2%&K+LX猫CLq+w1U*AJV4c#+>]vOT?!lmXp|˜͋^K -_wψHC؀!S͙I3|YNqPqA+ҙt/PyYzvwo^f6bkWnZ1jpKGkWPl8gX ZID9O}AIa bH,TccMpbS9-UC㟧I5HF׳s7D )by/Tc4-!Q qN@Zw'%yC":/QQ+!5j ^6)5XʭZk=ٖkۡU<"Q F$k7[ E vHl35bïg@D ECeSQ\J#`̆d h}kQHa fiũ+#uL?8hf -]\ϫڵIʃ_Lڵ8QG JG+U+ &IDq܂ً5.' UOKE֪xs-:Q5?J+{kv>!Îm}G8& r-}B1xԹ@!Z*idv'Q6͒4- <YcnX"xOPsY챠1JMYzVR5gVޅJc[4V%,27)H&<$"&L{48Ze΢He&˝dQ7 %rj<T&F$ʠqgHLԸ\0i!DL:L!(gIC1!;nZ,X[]5 Pd%ylsέ"2PM`^HV (X6ΫHT\ 7휑H&ׅk}nzVP QzV+w&TZZ"gN|kԽ}p 5H]XnjwӭglB gۂVËh8:m0"H{Bi'-_`80 ѧuDVҧ 賭/RrT yC禴y;hǞ6uʷ%kudo7ZV|sڻ벾6 DíNkUZaڐ x2=ڵwk'T+| x8'x>Q/nzRߗ􇊂4>ߕx|~()%eRCR؜>.KY;_X=`;i\?gEMelGUN)1=uQhݺ:]ƺ25Лu^ MnАW]tJr;xu|[WPT;XC"@c[ D y*)b1$p0fHXsH'EDL?ZAZ0m*A kXtQk>jdvD[Zqrkɻ ppa们p(8 lȳc;q_ 7E_d^~tj[4+5iluw4?\Ͳbcy[ܫ<;=oӯ]U"ߵ~IJlh&^\) c):iZTK!L҆׼p){2 TG?<\$|k~ ~x9.ž"JUh8pET@4cs 0?Imqfڃ瀣>KRf9/TG׻usU7 KE5Z9A1[u\a? f+ h@Y!Ql y0 jƎ\kHVAر;wOdU$TTTbGl QK)`I/_/|Q|Yb]Z2JaqT:CoUFĝUNc';kq? 2IN@oh=ڿ7eRI)a@4+PmL?Ѯqe."?P f5Z1wе~ƞ;'r4'JV(IT vJourEPJadhEՆk3 3r+>z&J11пfqicAhfYB4K& 4KhfQr=ff)g~ay+p6߾}M_-VlX@+w->U=Dg˵SdAdʼ#yT/q|&&9RvߍFp}9?$ҭ>[ֱ0rK$N%XQC?Kӥ|W~,xg7/YYc9jdu2wQ\Qdb2@-b06SrfmN=V"Sf*#N<^6.X@ d/X^ҙwa!2v,|Z-{ѝB™ 24Is_<8m y:q9κMY^s!cms{ɾjOc,Z~5hqUM~oBwdJbSmgo#d:ID(Rd c'n#rY"L8rOn6X`>~љ eB1;ɐky(Fˍ̸conP<3H4Xw࢓l'gq*r''}&%k5z.JwK/']3/]˷N I< 뻋i#oNa\/\ob(GoO/ )i%뼊ke6? B i; Ȇ+ N'p7O-W*]xhYnA{p)_ܔ*_{hY7wBJnni#}{Y^ x{r?G> :/$՝wpnu-7W> 毉uM>}%̶ލZVĀj~vyo.>LBlt˥)CdLj{F8mLS}٤˃// ޗr6G6!Rk mKedKNX{4,j_&9Y,f}gG8;˒R<+,*Jt򷛛믞N_->n7?'?EW 7; QrG.d<'%Wv+Qm ꁭ(ޥ$Ʌj5nfeۛmvy^6ul>#stfN/n^Yg&! 9=)ՒJkיv,B? 
w/oB¯;Z{EA.mt0a6ݶ{TIϗ8O 2W]{yM-͍vq:nL-6<伙_2>r?s69oeYR)KyWE$áًi1&nQ\|N_ =e/?7ښt3?,duD"dIa樜gGn8e9oӯU}}C27[/N-C2#6,}rE~qU IѬʂ#Mt)H3jdj oJi #Y4+Pfk$2lXѸIS7VL%7]o4l?n |}NH~fV|]4zyb/v U'Z2Ze;T$DDY[SX#Y%"[ ECLj%ȝNy?0[?:`'_^@e2KT1fNiS"b{^md] /iCaJ o$5ߎ\,X;.3=IDGZ.%pjN p^ s2@hz U)[p0hh=vL4jm8>>5~;ZGQFvIA,7i)0G$NPb&8؀+-6udbrO 7lw/ݶ߻P2q<|rUG3qm"ϬHyz6$gZ'ׂYj'l$܌i>r> ñPJf 3 r)ITq$cZ@,c][oƕ+¼: 6̾0Hik-{ƒSd$6խ) mh(ԩ:U|a4`WC^z[q.PBO1&@ln<$DzڡAJrTj㕮J*wVY*㤀&dN<*qb^ {LН@32'XFfȸ/,lT7!J f(!d%q7 u}bٱ#J!d|ƺ^Hlx)w\fI̲$hN8Cfd * 8' ^ètHyq8Y_3ޣLe<5e4Ss[6B sG{&b|yLbrpyo. 42OYimxޕfxjLyrR4tT¾h-4)\gbB!QM޶[FH!8'LE$[Ljũ8k=qhmbS}4X¢-tGn$(ހ nb v}G~V)CqppԂd/ /aЛD^azB6??tF2 jHa`V ُa6C$ 蠘8L,Qm bBg7RS/ ?ƈ$ぴ.%Ef(s\hiEJVZ@<'\ʓ D.+,Hio \Y)3[ԙU9Z| qFEhHz{I]YJ ;"Ya+-,h6 kdYYHm 咑GZ;kdY.dI -a/E,Ț= 2F1,Fj k5f,¦ ,jn]Z#kȊCs ZXeYXDd,Yz(?$r63rdIlH ne`7RῳF{b6aKpss]/M7(jʕ~c|A'V?}׺sV bK|I"dؚ~ G*qŸ[.__)}xi][NRp8eLl`.Kk4&6XUgwO5 ]O5L?˲ (]}6c{[Tl!S ayYɟF_Oe 9OxxoLiB= Z6Vk`eVlN|㉋ŨhA|(#%D$ȌwMr|΃yw HǺ;(gy D-(\ UB3=lk aZ%VrTBdFk#,d }幯.T9T5x·VL76Hv2E$dH9ij~;GY6G7o*&#Dz}㱗/P7cdhax=(G5} WǮ0yҙFSÊ=^I ۸R;2bb:7"AK4tWso ,о`dr>!4S{.YυI.ϿĄL(f>n%h<.pnTZ9G{gn<|jv~%g-`h>07QxV]xzcYU֚ѡ";a+" TJ ʹy: ѡ)AFaRkqV+GR%JYqH-ZVD DXrWttt#SC76ڗwER}T:{*18'WZb,} L 39eie!9UVCp#JYm ]x) X5ɨ98 CmLn2Ҩ*cNY([¿h\@Tu9X* ,4WckVg3ԣ1w/w 60-"H0#"nnb6`?4`U gh01SE+u+ uwDXpUDv&KzSOTl (*6}8ZZb>нD1a9/37E)ṯzh?~v҉o~G;;*# L|%y5K&2:G߉,Y<Z iG);+ &ѽz#u8kvNiAkl봜crjA~):lZkԅ60RL*v#c^1k1̰s+t J]RIYJDR= 5f'ּj R*`EJFyh ̬ҹTBV T8fge1zDX <ưؑ>z ưwqp55㑗@<$݇+|5gO5;S YI>"ݡڽ>j{fFo'#RC/Ԣ׃ZJ5Cx sϑs;]t,pXYcA"#KRW-!DZ06/NI&+Wi:ef⽟jJ&RR;uq[p;v,x2KXm-ꩁf_m!]kN<.((]i5 ,i=t^{<"܃16aeb6R mb`̳. 
g̢!&'3cZk'DlC|`+D^GLGh&E `y{s hS6FetiLdڵ.l]rZғғl1g\xGvQm =UCHgn.ʐFP~S'0-H?׷_@Vz01J WeqtVNwݷ?2]TTpv0qw}]ލ{߈[ _O ([pf>5Ś¢}e+kJ *pf˴븈:G$!,ZѦXRu%mrlHA.Hׅq#!rq@2{##n@V܈IX~1dBVTxNy2Μ٠|QfhQKg lwlPk6ĆL*m ڜ8'\WYs+aE%o#QI^dT)W(CbFKB E-_]BE;z6;?_g7*kL?5#fRQ LЯ ݷ|ݯ7&IȜW4D6C}G?~]|%=)s vYZ,Ԋ9J [e|GΖGvm(vx0.\[NENX# |گ^gx5r:9^^癕|3`& |s@$fG(zy x%Km\^F_g ֓Œ]͓\ +OUeU⭯ I&hލ ٱ ¸~ PX]_zuh@R\vp{C?'ޫ&364 ۻ 7Oal7ɿ؄?>$uؾĶAq{qm )ZKU=1YDRdUUo"/EQkecW^T'rŪ. s[k!eMx':"P#!^|O_ TiM0LΛh.Jʔ`s@t%;yRH8mߙX,BЗ)%޸b3%w ۲P`Ksɉ%⦉I0 wnVla1d8̆ؐ9=%Pw U!!w<BC? Y)]rr[ֺhp^xk@0>-+KXړH.EY--} ^{md-i2 YYA،sugRߺP0X<18s_hFRXL:X3i t&;(D)-Y?x` [2Lt **lYfې֕r&B2 \3)F5!FM#80'?χN'(<?prx &vw²oœR^\\z8TEC9-e9z_ j)F?bo;G:9x;(Ҋ4qP ͜XT/Q #WqGnb%Q|0|B='rLoi$!f?ywE) >}! ]jivE [Z3r"`fQΩ7`ñ$ 8`('FЧ602.rbf:w^wD ܯ_ՙc<=#K_ÝdxzbW>闘5>O- $<Nl͟{zv/c$jja3`<7?"XN1wqF1f7f-&5ƙ`I4B$v:2 W&NS>:@ۇ]\^isTTiF ^ Ʀ2H^Go44@%MàAQ6?V8C_!4($>ә*][sF+*n$TCjNr2۾$9Fc$9[率=%ˠHQԄyD证\$Q:Q"eR*2,?ş(빿.uf@~a\WrؙX969Nr5$\`W`|3wQ-o~p ~y}q!Z\g՗ ٓw_'?ݜ4JYE>Nh'a*a_RFYԇVʢ}kBDo bVH{#|IKm#˿&cY.u:BT[jTJSH&'`)h&-Fucf ؞jg @pa.V|U:fn|j<\'I=4]/͠&\JSj6I+MkLà~w'ZD+JTQ+&y2.-Ki5=C6'=M餇;Kl#-[mrbKtRMFK-5$*R"]e[OB9s$!DaSwHOugO,GShTBUXit>wz] عWXi:HM9w]&KSk7{%ry'5h,2qYVS V0 89|G VԇCゑSZZ{nQmKGv-Q.-ޏևOS>)*+JGs2̙ܭiq3Thܘ!rϫS.;>N}OCL~mn\?*fTv5{wnYe)kIC*@ c{iXЭ)(uvo_unfѭ h#R6wBtk4}Gt;;CֆU`VZJXlCUΫ |̈́ẅ́23LXCL(5?e5}TkCZ /J3mznQmP 1{kf}:OJ5J;k㒦U.mYB^=j wD-“ CT+nT= = a!ѳȩV+F ~炖CUOJ5Zk7D= Sj z:OJ5=ӴT4JiiNuWZ:ߦQ$jVN0Yfx.w$Ҵ4uYܳ0o$ѳg!? 
jiFmJO@Ҝj,]2f!0.н}TOTOi0?{H<ɩF0z<)k\~Umv?xMK9+nR!e?*0?NĿ"G8r=0"wLQKs9_'4QM2l??U{G:M??ZcM523t@YLbYnxJZdm#KCɞwjD=_ĝ6TUCss?|HijPz5!1יp5v8;oR&3ɡG;^St 6a 9{j_\]Pc7~v)' 쨱iT%KE_6zWNΞגyҽlSn\DX+YN= ^wSit߻(zx}f;vD6E#]3'c&c33fVree&~v>x6 Q&'-7bic@RBicʓĽH݉)lxI;e7RB\,m%Oͷ9>'nGY*ޅ!B'ݟhT{95[3>Q}bK 3\f%R_FoBOj-MtO7}.]辸,}n!2NIXvՋe5T>r%Eϳ3v){?к]B-W~ =Od[̲?©neLz[:_/M}AgY91[iG) fҌyG- an IKѝʤ߭-IPR2x)o$x$ ,lï߹"%|aUҥ ^2ۑLHB{5rQ.=`dz7W2Ht Ti˅0 LE1(gX~bfL2sL E2#C1NFĔQ42#:1 5L^>M$pi\K8ʥy,c5%~1T[H&Ja̧DD8Hm@ycRw-nD Vȹ4 3A6a6򥹺!k0, )Q*<&_8k폼TLȩ7FY!,\`+☉q0~a|iL,p=zH2Ō7yrv;{;x'̿Nq{1[+LlxBl&hoh|Ktޝk]AAILrJismdy2ƩW"{߫6-BX L?7,;jd,m~ji;*r; Jn+4)xgQ+}6w~LoJꦯK)UҤǵ \\F^:QfYq%2b:0 2!2ŒHW}:,zn?-ˉé9:KMҢ䏖8ujC :4  Y/YZg@0 P6f"s2ՑNcR X&Λ4 ;&6FPbgSW5tPLnHAieG2';F5UL?Zs'^tA {)ce< ALsak8O7yZ/r[e#++YGGY9Jм{*/)d/,$fbTۙ[.#_xn53xoL2l_<-OL[VBIFȦ77F׭ԽfMXP_u.x OL$mg)N)-qz 3Zs?i!{ch̹HBnͷ?V qJvgg=E {4=TCzś##e.Z\v;"֜g ߮nQ{e'q_.I{蟷׮:n⏿?Extxwxކ|֩+M:ko}z#@w\ps+^>\wwOҪ2-7V5] T.f]x*E%&9Q:+UXX/.?D,\2R$n]}=hB2}({~en"kB%P[_TyuXՙNK)iMbF' #\ݧ" p5[Yd|0OƦb g̒|œ3%2XLRO+|X@7Hx) bVRuU*g䟾]zeR0qȆW~=ɏͽm˧rWjg QWl+Gl0%q.|X>( 벐#$XL:IC3*MR! i&3 eQA*v}\bz/gty5}ѫQpDV#`aZoj_#/K,m~Qa"R&lmA"SsLs5bZ+RϝXD∫s)͛@YM,X͛l^e7MWV $5\MX̚OjЪPrHM+iʼn2qԝ{8}-9v: lBWPlrY_puPU:c?z:gti$xPZu?41bt"g`ZHU3]kOt_<sB]PĮṉ]TC9+Sg')Hh/2s $ PڶsKٻ5, Mbe&CE9pd~ejyVf2 &q@D;f:d[& X~bN. JaDѩ53(T :8C¨Q%TǑΔNęgcjEbnֽsԘ6bT­$Z$t+#K2.S*Z147gSpE)Huf86 RB,}l!~}粅 Go/ط~g,?ih5_^݅ o8w2)By2-/2 M e.R&ɄISPIE3L0UԫȨCFcfVYhVϘdō5 $cEfPy Ef4e0ҹ2kc%MRgV;ՄFn] 5wv@"4(&k6ZP֢Qv? s0Zz`, =LР*mVԄ{!7zԎhh`kVVW#O1\mVaף#^X` (`)Zޑ8Aj?ͺ'Wm8דDE歖Qr2T䏜jݴZ6eD'!7{"#jgZyS?{WVΣ_对$˾7PqՑH(orVrl%D#A?3N,JV|ެm| p߳tnaIITMQSKWځ ҙs Z-oo8~ՂySw.m~"exV:[c}tNԘ)g6z? 
Npjrح~vl\CKA~%Okm񲱟wڃݥcLX!,ܻKx4='ah{t 1{ KrIpE1s4O.cl~dwa@J1[J<@~G~\ݳOzS>Sʡ扻qu otā+H}`#7-=_zj :%tmN(ϩ3hcՔOl ()8Uhs4T%i5'̚cAZ2[ǚ$/S063g\$L EL\jmEOsybS\$F[62{fK;el +r#Qҏp 3u&_?ˏSFF vz#y,,M_Wop|7Z2,K|*.BY plUv9~5ShHi C88%qM ?/?C:3(iミrnIV6[e|4R3mQby}Y7<Mv%)V(΂~-Mc8quY4W!K4N-CjӠ3F :a?pܔ5fgq@y}jl(4 m$βDUu8 R׫7iNVin.%cX 6 9bf""im-ag} )DY񜒀@,V8rWyBdEH6>0WSYfA &CEzn $›$|idDvox=Oy}GFBM}YRk(zɓsh>6Y {2T|k{(-Nӄr?}L /Gvf3KVKNUl| S2+!™Gj#-$3$(Impݨ}VML^=(yBt]/y`rJ ֟7+N.#y|1)fk*ǥskIQ'a @C2+\]PnWdaCns)p $umFwk[P".ۿ!ޟG{C֞*ıSb/Xth|G+Em"BX|VBrJV%0ೈd?ӈxuvpC9xTž$YFըQf0bX\dzB *&D׬SqIƳF㲨tRa:S٩Ϸ',zE$fuF=Zn"C,o9BoO?;u dZ;@SU_Y57׫x^E]VQNT$P@e :%zs֖=\u,ё3' FB9:)5ک"Gc\ī58`~i ~1i#Z3 %0 B*۬R4h|l%ö1>gi$q{%ɪw[ n ӽמpdC6P'ޖ~3<5^mҔ%Q=PbU$=Gڲ;IoV2DR+=}4M 0@R_NrTC ^?c9?69_U.F#!G3◲x\^IZx ln͙ zrv2KΞ`yD'7^9^bbaք9/8)p(NirSf<2o5SolʡC0% Tw^G6\3崙z K͋uo/V{#N]" b|w:tO#x4KvN];N3_G4R[oO$N׋zl^/ FRHo^˼dRBz :8`D-)̞j'f`ٶ-H;14NO?vv/QGmOB=_?5:?50D{1a<3z.sj[eW=H沭6UzQCb4sKؾ^y,5hJ2USԼLvMŦ}4pe9~NW~:0E çA34P737P{ffL'}ęX/ίzO{q-^sJ2/xY榵Ct2@JSc'`3LJNxЇ$ S7,yP=\ \͖ }/N G5nKrvL.Won5~8@N&Z>7v2h=W%_):LDbR29JRF >ww/VWC籄&_6vkQȂ$6y?mRПi+NhIڋ0ע?V@ԝ5)՚NުN&}N9u#V;9kȢ 5pdO`ȤόTDIALd/:8;AC..WLs%>} %?ap q*jKAcߐ--o%U{ge@.!8T/L8{l>INy)1F(.~G>ΦZT5 ث$m9ݵmx{lf/َHH&UY*4v, \".fmd%ީ*+䪼Ybj&7zWg5Sg" ܐPO˹G:_{Sm!<m#LQWJWA:SڳBwa=R8DX& kY73[z[`$x*u4|rje:sqy9IlZKvZ>c^eB%`z8cY;Y_y6 d6I_QTJ,O%~+9-ߗu$G M6D)%O x-dQ~h^oSPm4*O@"}SSanAg8N@m[_r喨W]dO8$( WS7!HcMc9UM "ΐ'/bJ2$ Z頛 !7vI_Oxin.HKƂAI0j {Ƒ 6M"@'IW1oŖ~g(iDQPCRDp$z_UuuUw]@L0 ;yf$c  ",D~fu%a QN]+wI%.DPJ"k@?7S!HG4h!UYV hւ)*bLs$J0yMT `Ef{/:_ `G7lGs MD$˖6!,DnHH[!:`k3`t%4 #}3i %a `[B,mӽ|uc|uO@mEv^_nXJB'W2;_$HiBaÇլ`pґkI Xu9EW/!#?!rr 1]]'Fc!]5 kipMuanA.6`.eSXfDnν0C>_5عwږ֔,j: ß8_&/9I Lp(FDId >/|TS{5U7̕$nKVX>6iR=0 r|B4y@w)tBxIPW%Juoހ\XUJ]7ZԬxBr)uC>_8v= kZ" ަ⛍Ň @DFB;& 9pБW~ k0ņk_Q<|ǃ9'%@ P"ΑWTfykQ^k iOah3V᪖`wxe5TX݉MFyqE'KtNwHl +&4l8f4ZWN`q0,|mgE+-#RJR0uskXLy.C/@úXqMIDfms#D >9FI^QAD9κ?3p=5 "NPopw[ B ?ba"_,^˞0F{<u$E&H',˵)\؃ECb֜cv܃misʍ XPflpvnƋm;3_z#ܝA]?֘҃Q'{2Y1d[x s 5'b$3߬+r@?\,3dWRXU/vyjT0N0>|oq޲,j 
gJ2,{/aZG}Eq8?Ǣʸ/-RsltÕe%kX7XDZ+wy8c 4 UrG9n)uy*]ׄ} eOB*'2#G{6{|ḊЌ'53wҍ0;ɉ+Mއe˶aosOoC?H&b+'~Z|4#,5^'i8]O]BUD+OZon?qg<qi1]2rDTo[+#_2+${57'[&"vȦʖ%ʕe^I#/'k-& 8kVxkBw[Ot*Ua\Y/ ?;[+-okI]d.JB!`e3Fb<&SuN'*i#NҐRt_)n8S܄R'J>RYou O__߷$S htsotCFG!XaSDA&Γ-xo:Fa~I9CFS[WϚB:Ac(rPsa(`R˕< (AKu^uStQW=+Uj@Զ.F& IǨ6eF"`g&%qVx+E72EdJ,v2Қdl`u? w1 ,K/_~Y }[DP0lO¢tJKXVA~~{5)0*P⏾~윞]=ъ`Vo4T"LQՆgN3lMtt6_2b0gwfۜhotH*8lL[;_?^L'W{04u1Xغ2CSf<4aXVQEmA]3(XԌ3jXBJKcc%:,wm%:phe6^4%t&9vhq6"cr0p M$9H&IoHv1M裞!ǞpmQ!EFYNz\&.ѡYɛ5 kX4yybЗ.σuF`P%M^rxqBȁUF4TJN^޶Tv8_P(<\߰ TJ_2Rs| 2 ,Yof\UHSx} {X'rvUsv\9-~1 eaOyyˊ@MBbT2 s'NᇿǓ GpLq$Kx|p@{wӉK=i 1:(y+)庆U$va&~, ~DHWɊYK< #\ fCTa@[~)$t W2$紽i_M2VB6`c0L 2 ž'/9e o}#PuR3-"ڒ4叶# \Ruϭu{Eњ wrr}hER9|Eg%dS` ^MFAsW}4>v×@8O飩. A . ..$ C7@)4D)4GgT/@}x&${̝9DjvF'J9@2ͷ]4fnݢjhgϣ9/[O#H&nul` 5V7/BS*ylYKNVKWg-sG5lty%ږ$䕋h#bSnPB5AiMY_[%sڭyyM y"HzW√d'"I -1V!5UHBJ !ACr$DGq)ơ%6D'd::Pwq~P K' BQq.sځ@t`DDNxkcC$ܼP#]7G+y5h†׃(B_F)Kͼ&n~1ԟzBʛU1EwWCCk0WJ<-^#yYf+%TL/L"B3Qճkrj.^~Mn0[.7s/KԈJ+/WCV6In {?|"/:xaT>8ȼe8t}~SɼPa^JB|yoCޛGt)e!ѩ.Qmf JFT#xlYPUp!Kg?yYD@' Kv %kF*D9%Hƻa+Q82x;TaO((V@TR?v9}˫A"T`I0qFhⶇr$#D$PO fSw;zhu!!, !o6_\*k2pvJ?!|yhrtϥ 62LG٥_|fY˾૜ٿDmﮯjPْjfv a:}XO/ߜw{wtJ˟zd=U ~rt0_J]0rVٕźxX;Fm:1]~L)׃6~Nn5}W&b&}Joةd,~ ߢÏ~aNX`a^[GzɣC(bF+RQVhw,dOo0=TY ھh7A1։kpy2 ͂,oM3%âDEFT'[Q=ަZ_= )tCTw6OlY,9>B"b1sR>}dHd|ѦȢʩFjYL!۸e+;=XMyt6T(y ? 
'7fCbԔ5rtdahN3:gXD(sw Y0߾}a4O>LtqEp{n1JV B&q4 kɡhCt2QuvNj1qWDs)>$ % ?{\ P"gj՜kC!;"Zn'Υ_vi_jוֹD$5~o+䌾ps4g0C0?f:`ã7LEFr$!H@Z sL$^2 wPηBnC܆6 )nhX,zBaǿ~_ŧ=D gwnA)}!Q(?^uX5IpO$^a}x11Xx =y_2)Haa^+Ua KF/p]+`sBGJ# -9 MDV.]9‰_ ӓYpT$jD1zjK(A%(HB5uZe3 144%\Bթ 3l6D1>;Y: 4>vKT̪|4i А.c3BR2sː֧Z 30<>ͷB"(8e'bJ Єc &͚I1ٶTׯv_%kfiBe/!l.kb_b9[Y ]d"d=SM Dz/U5|m_ĺkUZwzx_תnYUEiۓ7;^iG7[=ݺhJ*J̈՜5dU\ YeH^91!e[nn~}M@S:;^~62'q(p8[Ss |5yIOMWd kg`OU;zQm_{<ՂB-9$2.64{p#mflA=7ly#K }<[].~NJ򰠫lGѧAze7Y ) VYdn|zsAnߏ<@Ý~(ruSBp"foWͻ :uv;+Q1КvWݚ@Zv^(u:v˃*Q5VP/R5/%לw|hNGJt"{|Hщ,V Zz>,V~E(sDJ|:LxjUh#5dHdU8/5+5d.M_jrR~Eu,[*=Dxj{Uھ*QuS Ή,㐕S_3 +~REAtu#]ɒtEYHzL4bTP%Qʫ (ayG\~+3_ePIn=@$ޥDhˬ-8H$16sD ( ƐL(&MuhO`yP֦vC}66JF=R+;%J kcPP%H hO)O3J9"E%[.MKwFB/\qF}yrUFL=BQ|)Qsd&SaMvcR˧`IRH`i$paIFD #j3h-ΨkT P yiκq"9Ղ~.o"`M k"%4U?a"Qjm1ݸ&j0Oo[d>cBw@zMuW8 KޅT3迸+{X]~seh'_ܕjP #{8*S1ҜjMyxe幍 #-kNFU뿸R/KQޚuBzM2Ru4Y@cLuBzMTU?iY9_OD #I.:D_N:S var/home/core/zuul-output/logs/kubelet.log0000644000000000000000005202307315144576134017711 0ustar rootrootFeb 16 09:45:41 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 16 09:45:41 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 
crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 
crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc 
restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:41 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:41 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc 
restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 09:45:42 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 09:45:42 crc kubenswrapper[4814]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.734781 4814 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744440 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744480 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744487 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744493 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744500 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744506 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744512 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744517 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744522 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744527 4814 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744555 4814 feature_gate.go:330] unrecognized feature gate: Example Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744561 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744566 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744571 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744601 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744606 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744612 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744620 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744628 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744633 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744639 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744656 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744664 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744670 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744675 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744681 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744687 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744692 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744697 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744703 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744708 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744714 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744719 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744724 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744729 4814 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744734 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744739 4814 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImagesAWS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744744 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744752 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744758 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744764 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744769 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744774 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744781 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744787 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744792 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744797 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744803 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744808 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744814 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744819 4814 feature_gate.go:330] unrecognized 
feature gate: AutomatedEtcdBackup Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744826 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744831 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744838 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744846 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744853 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744859 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744865 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744871 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744877 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744882 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744888 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744893 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744900 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744905 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs 
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744910 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744916 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744921 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744926 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744932 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.744938 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745802 4814 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745823 4814 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745835 4814 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745844 4814 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745854 4814 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745860 4814 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745870 4814 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745878 4814 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745886 4814 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745892 4814 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745899 4814 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745907 4814 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745914 4814 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745921 4814 flags.go:64] FLAG: --cgroup-root=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745927 4814 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745933 4814 flags.go:64] FLAG: --client-ca-file=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745939 4814 flags.go:64] FLAG: --cloud-config=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745945 4814 flags.go:64] FLAG: --cloud-provider=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745951 4814 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745960 4814 flags.go:64] FLAG: --cluster-domain=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745966 4814 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745973 4814 flags.go:64] FLAG: --config-dir=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745979 4814 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745986 4814 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.745994 4814 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746001 4814 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746007 4814 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746014 4814 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746021 4814 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746027 4814 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746034 4814 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746040 4814 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746046 4814 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746054 4814 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746060 4814 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746067 4814 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746072 4814 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746080 4814 flags.go:64] FLAG: --enable-server="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746086 4814 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746094 4814 flags.go:64] FLAG: --event-burst="100"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746100 4814 flags.go:64] FLAG: --event-qps="50"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746106 4814 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746112 4814 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746120 4814 flags.go:64] FLAG: --eviction-hard=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746128 4814 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746134 4814 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746140 4814 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746147 4814 flags.go:64] FLAG: --eviction-soft=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746153 4814 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746159 4814 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746165 4814 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746171 4814 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746177 4814 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746183 4814 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746189 4814 flags.go:64] FLAG: --feature-gates=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746197 4814 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746203 4814 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746210 4814 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746216 4814 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746222 4814 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746228 4814 flags.go:64] FLAG: --help="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746235 4814 flags.go:64] FLAG: --hostname-override=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746241 4814 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746247 4814 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746253 4814 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746259 4814 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746265 4814 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746271 4814 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746277 4814 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746283 4814 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746289 4814 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746295 4814 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746302 4814 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746308 4814 flags.go:64] FLAG: --kube-reserved=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746314 4814 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746323 4814 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746330 4814 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746336 4814 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746343 4814 flags.go:64] FLAG: --lock-file=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746349 4814 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746355 4814 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746362 4814 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746372 4814 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746378 4814 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746384 4814 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746389 4814 flags.go:64] FLAG: --logging-format="text"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746395 4814 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746402 4814 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746409 4814 flags.go:64] FLAG: --manifest-url=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746416 4814 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746424 4814 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746430 4814 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746438 4814 flags.go:64] FLAG: --max-pods="110"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746445 4814 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746451 4814 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746457 4814 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746463 4814 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746469 4814 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746476 4814 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746482 4814 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746497 4814 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746504 4814 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746510 4814 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746517 4814 flags.go:64] FLAG: --pod-cidr=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746523 4814 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746559 4814 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746567 4814 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746575 4814 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746584 4814 flags.go:64] FLAG: --port="10250"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746598 4814 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746607 4814 flags.go:64] FLAG: --provider-id=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746615 4814 flags.go:64] FLAG: --qos-reserved=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746623 4814 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746630 4814 flags.go:64] FLAG: --register-node="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746637 4814 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746644 4814 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746656 4814 flags.go:64] FLAG: --registry-burst="10"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746662 4814 flags.go:64] FLAG: --registry-qps="5"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746669 4814 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746676 4814 flags.go:64] FLAG: --reserved-memory=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746684 4814 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746691 4814 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746698 4814 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746704 4814 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746710 4814 flags.go:64] FLAG: --runonce="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746717 4814 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746723 4814 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746730 4814 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746736 4814 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746743 4814 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746749 4814 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746755 4814 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746762 4814 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746768 4814 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746774 4814 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746781 4814 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746787 4814 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746794 4814 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746800 4814 flags.go:64] FLAG: --system-cgroups=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746807 4814 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746817 4814 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746823 4814 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746829 4814 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746838 4814 flags.go:64] FLAG: --tls-min-version=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746844 4814 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746851 4814 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746857 4814 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746864 4814 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746870 4814 flags.go:64] FLAG: --v="2"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746879 4814 flags.go:64] FLAG: --version="false"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746887 4814 flags.go:64] FLAG: --vmodule=""
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746895 4814 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.746901 4814 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747052 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747058 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747064 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747070 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747076 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747082 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747088 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747095 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747102 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747109 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747115 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747120 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747126 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747133 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747139 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747146 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747153 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747160 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747166 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747172 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747178 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747184 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747189 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747195 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747200 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747206 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747211 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747216 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747235 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747240 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747246 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747251 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747256 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747261 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747266 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747273 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747279 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747284 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747289 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747294 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747300 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747305 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747310 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747317 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747322 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747327 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747333 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747338 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747345 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747352 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747358 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747380 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747386 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747392 4814 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747397 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747403 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747409 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747414 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747419 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747425 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747431 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747436 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747441 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747446 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747455 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747461 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747466 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747477 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747483 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747488 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.747493 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.747511 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.762039 4814 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.762105 4814 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762249 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762266 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762279 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762293 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762306 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762316 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762325 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762334 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762343 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762352 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762361 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762371 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762381 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762391 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762400 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762409 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762417 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762429 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762442 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762455 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762464 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762473 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762482 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762491 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762499 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762508 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762528 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762564 4814 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762573 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762582 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762591 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762601 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762610 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762620 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762629 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762639 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762648 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762657 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762665 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762677 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762688 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762697 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762706 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762716 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762725 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762734 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762742 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762751 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762759 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762768 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762776 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762785 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762794 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762802 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762811 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762821 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762829 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762838 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762847 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762855 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762864 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762873 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762881 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762889 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762901 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762912 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762921 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762930 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762940 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762949 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.762959 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.762974 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763253 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763268 4814 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763278 4814 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763287 4814 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763296 4814 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763304 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763313 4814 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763321 4814 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763330 4814 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763339 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763347 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763356 4814 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763364 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763373 4814 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763388 4814 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763399 4814 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763408 4814 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763418 4814 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763427 4814 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763437 4814 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763445 4814 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763453 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763462 4814 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763470 4814 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763480 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763488 4814 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763497 4814 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763509 4814 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763519 4814 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763529 4814 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763570 4814 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763579 4814 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763587 4814 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763596 4814 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763604 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763613 4814 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763625 4814 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763636 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763646 4814 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763659 4814 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763669 4814 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763680 4814 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763689 4814 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763699 4814 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763708 4814 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763716 4814 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763726 4814 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763734 4814 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763743 4814 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763751 4814 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763759 4814 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763768 4814 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763776 4814 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763784 4814 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763793 4814 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763803 4814 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763812 4814 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763820 4814 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763829 4814 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763837 4814 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763846 4814 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763854 4814 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763863 4814 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763871 4814 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763879 4814 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763888 4814 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763896 4814 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763904 4814 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763913 4814 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763922 4814 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.763930 4814 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.763951 4814 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.765427 4814 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.772621 4814 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.772777 4814 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.775617 4814 server.go:997] "Starting client certificate rotation"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.775675 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.775981 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-08 22:15:22.698105357 +0000 UTC
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.776188 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.801776 4814 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.803786 4814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.805763 4814 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.826114 4814 log.go:25] "Validated CRI v1 runtime API"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.869937 4814 log.go:25] "Validated CRI v1 image API"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.872952 4814 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.881857 4814 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-09-40-40-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.881953 4814 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.906948 4814 manager.go:217] Machine: {Timestamp:2026-02-16 09:45:42.902822961 +0000 UTC m=+0.595979161 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:229af786-ea3b-485b-b39a-f6a3c0e23f09 BootID:fefaad58-c4d3-4766-b042-986d2228ca91 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f4:70:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f4:70:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b7:40:c3 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:8d:27:ec Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1f:ad:f7 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:57:83:a8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d8:00:17:b1:5f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8a:c0:e7:5d:e4:45 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.907253 4814 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.907451 4814 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.907922 4814 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.908174 4814 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.908230 4814 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.908525 4814 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.908581 4814 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.909301 4814 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.909343 4814 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.910159 4814 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.910275 4814 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.914601 4814 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.914657 4814 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.914683 4814 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.914699 4814 kubelet.go:324] "Adding apiserver pod source"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.914719 4814 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.919699 4814 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.921676 4814 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.923872 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.923980 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.924031 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.924152 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.924702 4814 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926406 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926448 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926462 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926475 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926498 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926511 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926524 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926570 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926585 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926599 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926641 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.926661 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.931965 4814 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.932364 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.933332 4814 server.go:1280] "Started kubelet"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.934738 4814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.934740 4814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.935460 4814 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 09:45:42 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.937593 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.937674 4814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.938173 4814 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.938162 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:28:23.062385681 +0000 UTC
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.938202 4814 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.938215 4814 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.938911 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms"
Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.938932 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.939172 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.939276 4814 factory.go:55] Registering systemd factory
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.939340 4814 factory.go:221] Registration of the systemd container factory successfully
Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.938853 4814 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940237 4814 factory.go:153] Registering CRI-O factory
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940271 4814 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940283 4814 factory.go:221] Registration of the crio container factory successfully
Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940647 4814 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file
or directory Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940688 4814 factory.go:103] Registering Raw factory Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.940711 4814 manager.go:1196] Started watching for new ooms in manager Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.940980 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894b0fa643a118c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:45:42.933287308 +0000 UTC m=+0.626443498,LastTimestamp:2026-02-16 09:45:42.933287308 +0000 UTC m=+0.626443498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.943578 4814 manager.go:319] Starting recovery of all containers Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.948794 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949270 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949699 4814 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949745 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949783 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949806 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949855 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949888 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949916 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949946 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.949968 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950001 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950024 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950060 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950082 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950109 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950133 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950198 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950225 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950243 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950265 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950281 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950295 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950510 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950572 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.950617 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951023 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" 
seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951061 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951093 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951118 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951147 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951168 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.951451 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: 
I0216 09:45:42.951476 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952463 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952608 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952688 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952765 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952840 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.952935 4814 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953008 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953082 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953145 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953218 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953280 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953340 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953407 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.953480 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954087 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954137 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954148 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954160 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954184 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954201 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954219 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954234 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954249 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954305 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954319 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954331 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954346 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954358 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954369 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954382 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954399 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954412 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954425 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954439 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954451 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954465 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954478 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954490 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954505 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954516 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954526 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954626 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954641 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954653 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954667 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954679 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954693 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954706 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954717 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954728 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954740 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954752 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954765 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954775 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954787 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954798 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954811 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954826 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954837 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954852 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954865 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954877 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954890 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954908 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954919 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954933 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 
09:45:42.954944 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954957 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954971 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.954982 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955003 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955016 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955028 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955118 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955132 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955145 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955157 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955169 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955181 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955194 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955203 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955216 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955225 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955237 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955249 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955259 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955269 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955284 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955295 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955306 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955317 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955328 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955339 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955349 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955359 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955371 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955382 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" 
seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955392 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955402 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955412 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955423 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955433 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955446 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 
09:45:42.955459 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955497 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955508 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955520 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955547 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955559 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955573 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955584 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955598 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955609 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955620 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955631 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955644 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955655 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955667 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955678 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955691 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955702 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955715 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955727 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955742 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955753 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955766 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955778 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955793 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955804 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955815 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955825 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955836 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955849 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955866 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955878 4814 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955890 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955900 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955914 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955928 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955940 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955956 4814 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955968 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955983 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.955998 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956011 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956023 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956032 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956043 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956054 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956065 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956077 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956090 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956101 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956113 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956124 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956134 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956146 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956156 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956169 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" 
seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956182 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956194 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956204 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956213 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956223 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956232 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 
09:45:42.956244 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956254 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956265 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956277 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956288 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956301 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.956313 4814 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.958803 4814 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.958831 4814 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.958846 4814 reconstruct.go:97] "Volume reconstruction finished" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.958856 4814 reconciler.go:26] "Reconciler: start to sync state" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.977975 4814 manager.go:324] Recovery completed Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.988912 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.989108 4814 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.991897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.991939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.991949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.992160 4814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.992196 4814 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.992224 4814 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.992339 4814 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.992982 4814 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.993002 4814 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 09:45:42 crc kubenswrapper[4814]: I0216 09:45:42.993020 4814 state_mem.go:36] "Initialized new in-memory state store" Feb 16 09:45:42 crc kubenswrapper[4814]: W0216 09:45:42.993289 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:42 crc kubenswrapper[4814]: E0216 09:45:42.993397 4814 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.018959 4814 policy_none.go:49] "None policy: Start" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.020204 4814 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.020345 4814 state_mem.go:35] "Initializing new in-memory state store" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.040344 4814 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.069672 4814 manager.go:334] "Starting Device Plugin manager" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.069750 4814 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.069769 4814 server.go:79] "Starting device plugin registration server" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.070357 4814 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.070372 4814 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.070639 4814 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.070733 4814 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.070742 4814 plugin_manager.go:118] "Starting Kubelet 
Plugin Manager" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.079021 4814 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.092594 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.092688 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.096854 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.096925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.096945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.097202 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.097767 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.097821 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.101668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.101722 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.101749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.102060 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.102578 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.102661 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.102935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103301 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103347 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103892 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.104890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.104917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.103958 
4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.105082 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.105953 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.105976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.105985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.106853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.106883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.106892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107044 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107497 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107583 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.107928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.108166 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.108204 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.108580 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.108763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.108896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.110788 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.110815 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.110825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.140322 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161445 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161497 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161579 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161632 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161657 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161703 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161755 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161849 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161898 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 
09:45:43.161941 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.161983 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.162050 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.162074 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.162099 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.162144 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.171097 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.173027 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.173072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.173085 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.173122 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.173817 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.263991 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264071 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264106 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264139 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264174 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264244 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264305 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264337 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264370 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264397 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264426 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264518 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264582 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264638 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264875 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264945 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264962 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264932 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265019 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265052 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265063 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265095 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265069 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265114 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265066 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265132 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.265149 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.264876 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.374871 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.376861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.376906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.376919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.376946 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.377524 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.442471 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.452857 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.481777 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.492920 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: W0216 09:45:43.493563 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-bf9ecc91b88d9b72033786a5802479b7d58f655f01917194040928dd858309d7 WatchSource:0}: Error finding container bf9ecc91b88d9b72033786a5802479b7d58f655f01917194040928dd858309d7: Status 404 returned error can't find the container with id bf9ecc91b88d9b72033786a5802479b7d58f655f01917194040928dd858309d7 Feb 16 09:45:43 crc kubenswrapper[4814]: W0216 09:45:43.500153 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-eac8beb54c26915bc1fe7babbd97a098a267b79da9660a7fac43818abd678217 WatchSource:0}: Error finding container eac8beb54c26915bc1fe7babbd97a098a267b79da9660a7fac43818abd678217: Status 404 returned error can't find the container with id eac8beb54c26915bc1fe7babbd97a098a267b79da9660a7fac43818abd678217 Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.516451 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:43 crc kubenswrapper[4814]: W0216 09:45:43.522643 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bdbab60e6e4e1b2fbf771817b45b53b632f1d35178f4e1dcc6f77e88daedc6ac WatchSource:0}: Error finding container bdbab60e6e4e1b2fbf771817b45b53b632f1d35178f4e1dcc6f77e88daedc6ac: Status 404 returned error can't find the container with id bdbab60e6e4e1b2fbf771817b45b53b632f1d35178f4e1dcc6f77e88daedc6ac Feb 16 09:45:43 crc kubenswrapper[4814]: W0216 09:45:43.523808 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ef2bb5427e95aea43188a23b63df8afd4120141b525488efc48857e8891df5e6 WatchSource:0}: Error finding container ef2bb5427e95aea43188a23b63df8afd4120141b525488efc48857e8891df5e6: Status 404 returned error can't find the container with id ef2bb5427e95aea43188a23b63df8afd4120141b525488efc48857e8891df5e6 Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.541954 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms" Feb 16 09:45:43 crc kubenswrapper[4814]: W0216 09:45:43.553077 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f708760524534c960e6e567ce4e1c3d1803f6bf7e0b006f62ea6e58dfe94bac0 WatchSource:0}: Error finding container f708760524534c960e6e567ce4e1c3d1803f6bf7e0b006f62ea6e58dfe94bac0: Status 404 returned error can't find the container with id 
f708760524534c960e6e567ce4e1c3d1803f6bf7e0b006f62ea6e58dfe94bac0 Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.777719 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.779469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.779562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.779577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.779615 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: E0216 09:45:43.780656 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.933685 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.938686 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:20:10.941154444 +0000 UTC Feb 16 09:45:43 crc kubenswrapper[4814]: I0216 09:45:43.998788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bf9ecc91b88d9b72033786a5802479b7d58f655f01917194040928dd858309d7"} Feb 16 
09:45:44 crc kubenswrapper[4814]: I0216 09:45:43.999960 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"eac8beb54c26915bc1fe7babbd97a098a267b79da9660a7fac43818abd678217"} Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.000852 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f708760524534c960e6e567ce4e1c3d1803f6bf7e0b006f62ea6e58dfe94bac0"} Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.002263 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bdbab60e6e4e1b2fbf771817b45b53b632f1d35178f4e1dcc6f77e88daedc6ac"} Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.003474 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ef2bb5427e95aea43188a23b63df8afd4120141b525488efc48857e8891df5e6"} Feb 16 09:45:44 crc kubenswrapper[4814]: W0216 09:45:44.031281 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.031364 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection 
refused" logger="UnhandledError" Feb 16 09:45:44 crc kubenswrapper[4814]: W0216 09:45:44.078614 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.078720 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:44 crc kubenswrapper[4814]: W0216 09:45:44.203948 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.204501 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.343897 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s" Feb 16 09:45:44 crc kubenswrapper[4814]: W0216 09:45:44.403037 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.403136 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.449577 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894b0fa643a118c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:45:42.933287308 +0000 UTC m=+0.626443498,LastTimestamp:2026-02-16 09:45:42.933287308 +0000 UTC m=+0.626443498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.581181 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.583690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.583758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.583781 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.583848 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.584636 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.933588 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.939729 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:21:28.089775346 +0000 UTC Feb 16 09:45:44 crc kubenswrapper[4814]: I0216 09:45:44.956142 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 09:45:44 crc kubenswrapper[4814]: E0216 09:45:44.957518 4814 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.017016 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.017089 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.017114 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.017136 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.017262 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.018757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.018809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.018829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.021001 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="281bcdc92d2ba29365324ee3e09387564c9efd7ad3d69db19c1e815174bcbec1" exitCode=0 Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.021078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"281bcdc92d2ba29365324ee3e09387564c9efd7ad3d69db19c1e815174bcbec1"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.021291 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.027890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.027947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.027969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.028358 4814 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d" exitCode=0 Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.028578 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.028618 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.030659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 
crc kubenswrapper[4814]: I0216 09:45:45.030898 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.031174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.031929 4814 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44" exitCode=0 Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.032081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.032103 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.033495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.033597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.033627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.034424 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227" exitCode=0 Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.034500 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227"} Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.034652 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.035990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.036035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.036054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.038302 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.039876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.039939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.039960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.933304 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:45 crc kubenswrapper[4814]: I0216 09:45:45.939843 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 14:11:51.891316738 +0000 UTC Feb 16 09:45:45 crc kubenswrapper[4814]: E0216 09:45:45.945333 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.042901 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.042972 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.042934 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.042990 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.044739 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.044805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.044825 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.048237 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.048287 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.048308 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.052783 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c3788ce28ee42eb21cdd6001d61c29b269847a7483beb3b915fd38381ff73e05" exitCode=0 Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.052877 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c3788ce28ee42eb21cdd6001d61c29b269847a7483beb3b915fd38381ff73e05"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.052982 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.054862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.054947 
4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.054977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.056871 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.056863 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294"} Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.056883 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.058894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:46 crc 
kubenswrapper[4814]: W0216 09:45:46.180189 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:46 crc kubenswrapper[4814]: E0216 09:45:46.180334 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.185081 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.186675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.186733 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.186763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.186819 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:46 crc kubenswrapper[4814]: E0216 09:45:46.187565 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.73:6443: connect: connection refused" node="crc" Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.878468 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 
09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.935486 4814 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:46 crc kubenswrapper[4814]: I0216 09:45:46.940528 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:34:13.729138985 +0000 UTC Feb 16 09:45:47 crc kubenswrapper[4814]: W0216 09:45:47.021145 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:47 crc kubenswrapper[4814]: E0216 09:45:47.021227 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.067016 4814 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d08228cebc15dfa4a75fb05cc2da2fe9a55ec52918b193a7ee0c41f68d1c7553" exitCode=0 Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.067125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d08228cebc15dfa4a75fb05cc2da2fe9a55ec52918b193a7ee0c41f68d1c7553"} Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.067322 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:47 crc 
kubenswrapper[4814]: I0216 09:45:47.068727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.068756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.068765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.070591 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.073518 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6" exitCode=255 Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.073630 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.074090 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.074386 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6"} Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.074424 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7"} Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 
09:45:47.074477 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.074499 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.074858 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:47 crc 
kubenswrapper[4814]: I0216 09:45:47.076124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076478 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.076785 4814 scope.go:117] "RemoveContainer" containerID="0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.086855 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:47 crc kubenswrapper[4814]: W0216 09:45:47.119933 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:47 crc kubenswrapper[4814]: E0216 09:45:47.120025 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.133253 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:47 crc kubenswrapper[4814]: W0216 09:45:47.174793 4814 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.73:6443: connect: connection refused Feb 16 09:45:47 crc kubenswrapper[4814]: E0216 09:45:47.174889 4814 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.73:6443: connect: connection refused" logger="UnhandledError" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.660051 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:47 crc kubenswrapper[4814]: I0216 09:45:47.941377 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:48:29.974596696 +0000 UTC Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.079701 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.081843 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602"} Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.082040 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.083137 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.083212 4814 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.083232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.091819 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00eb6e4414d01cf4ff089e7d6bfa97ff2399c6537ba32753bfd42f2806df3cde"} Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.091875 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1da5ab9d682f860b9b627cb86a2cc3f82161f0cb838da3a1ac8aa4b17af307de"} Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.091896 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.091898 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"94df4c1b44f0a9ae7db95b06fddd70b3873621ece6f3c07061807717d54ee893"} Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.092061 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"016ccf39b93497ac3ff524f80fd3adcd4fc46cad23c1c8b78741614752497931"} Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.092891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.092957 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.092978 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.520846 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.727988 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:48 crc kubenswrapper[4814]: I0216 09:45:48.942148 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:05:33.851956246 +0000 UTC Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.101107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e3a5016e7ea0e85c8f3f469ee5bb299ee7f4c79664eeac8f70bec71ddc3ed607"} Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.101179 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.101237 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.101282 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.101283 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.102867 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.103873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.103909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.103925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.230142 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.388606 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.390336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.390385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.390397 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.390424 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:45:49 crc kubenswrapper[4814]: I0216 09:45:49.942339 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:55:21.597919819 +0000 UTC Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.104097 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.104145 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.104217 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.104233 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105812 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:50 
crc kubenswrapper[4814]: I0216 09:45:50.105844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.105955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.523687 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.523957 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.525720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.525798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.525823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:50 crc kubenswrapper[4814]: I0216 09:45:50.942936 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 17:01:47.603367321 +0000 UTC Feb 16 09:45:51 crc kubenswrapper[4814]: I0216 09:45:51.144697 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 09:45:51 crc kubenswrapper[4814]: 
I0216 09:45:51.144906 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:51 crc kubenswrapper[4814]: I0216 09:45:51.146660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:51 crc kubenswrapper[4814]: I0216 09:45:51.146743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:51 crc kubenswrapper[4814]: I0216 09:45:51.146764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:51 crc kubenswrapper[4814]: I0216 09:45:51.943671 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:09:20.580251101 +0000 UTC Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.239388 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.239775 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.241621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.241680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.241700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.944211 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:53:15.454025072 +0000 UTC Feb 16 
09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.975898 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.976162 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.978179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.978224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:52 crc kubenswrapper[4814]: I0216 09:45:52.978239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:53 crc kubenswrapper[4814]: E0216 09:45:53.079151 4814 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.677066 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.677348 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.679626 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.679705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.679728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:53 crc kubenswrapper[4814]: I0216 09:45:53.944656 4814 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:26:02.811746378 +0000 UTC Feb 16 09:45:54 crc kubenswrapper[4814]: I0216 09:45:54.945391 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:30:02.867272042 +0000 UTC Feb 16 09:45:55 crc kubenswrapper[4814]: I0216 09:45:55.946342 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 01:25:44.410649503 +0000 UTC Feb 16 09:45:55 crc kubenswrapper[4814]: I0216 09:45:55.976937 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 09:45:55 crc kubenswrapper[4814]: I0216 09:45:55.977065 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.884175 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.884430 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.886983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.887018 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.887030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:45:56 crc kubenswrapper[4814]: I0216 09:45:56.947006 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:22:09.038811722 +0000 UTC Feb 16 09:45:57 crc kubenswrapper[4814]: I0216 09:45:57.660567 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 09:45:57 crc kubenswrapper[4814]: I0216 09:45:57.660693 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 09:45:57 crc kubenswrapper[4814]: I0216 09:45:57.712475 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 09:45:57 crc kubenswrapper[4814]: I0216 09:45:57.712623 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 09:45:57 crc kubenswrapper[4814]: I0216 09:45:57.948128 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:59:38.147286624 +0000 UTC Feb 16 09:45:58 crc kubenswrapper[4814]: I0216 09:45:58.948518 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:18:25.620552235 +0000 UTC Feb 16 09:45:59 crc kubenswrapper[4814]: I0216 09:45:59.948665 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:00:40.439462564 +0000 UTC Feb 16 09:46:00 crc kubenswrapper[4814]: I0216 09:46:00.949761 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:35:43.884655524 +0000 UTC Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.180348 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.180680 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.182365 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.182407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.182417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:01 crc 
kubenswrapper[4814]: I0216 09:46:01.201216 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 09:46:01 crc kubenswrapper[4814]: I0216 09:46:01.950966 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:13:35.111528865 +0000 UTC Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.142154 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.143644 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.143692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.143702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.240360 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.240441 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.665569 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.666185 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.666694 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.666763 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.667605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.667731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.667838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.670259 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.704062 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 09:46:02 crc 
kubenswrapper[4814]: I0216 09:46:02.706293 4814 trace.go:236] Trace[829685141]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 09:45:49.606) (total time: 13099ms):
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[829685141]: ---"Objects listed" error: 13099ms (09:46:02.706)
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[829685141]: [13.099670114s] [13.099670114s] END
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.706336 4814 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.707861 4814 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.708078 4814 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.708885 4814 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.709844 4814 trace.go:236] Trace[1916617949]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 09:45:50.379) (total time: 12330ms):
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[1916617949]: ---"Objects listed" error: 12329ms (09:46:02.709)
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[1916617949]: [12.330033223s] [12.330033223s] END
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.709987 4814 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.714152 4814 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.719238 4814 trace.go:236] Trace[981450350]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 09:45:52.190) (total time: 10529ms):
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[981450350]: ---"Objects listed" error: 10529ms (09:46:02.719)
Feb 16 09:46:02 crc kubenswrapper[4814]: Trace[981450350]: [10.529170442s] [10.529170442s] END
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.719274 4814 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.732987 4814 csr.go:261] certificate signing request csr-jrmzx is approved, waiting to be issued
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.742723 4814 csr.go:257] certificate signing request csr-jrmzx is issued
Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.776275 4814 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 16 09:46:02 crc kubenswrapper[4814]: W0216 09:46:02.776550 4814 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 16 09:46:02 crc kubenswrapper[4814]: W0216 09:46:02.776579 4814 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 16 09:46:02 crc kubenswrapper[4814]: W0216 09:46:02.776650 4814 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 16 09:46:02 crc kubenswrapper[4814]: W0216 09:46:02.776596 4814 reflector.go:484]
k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.776680 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.1894b0fa67b91e9b\": read tcp 38.102.83.73:45582->38.102.83.73:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.1894b0fa67b91e9b default 26174 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:45:42 +0000 UTC,LastTimestamp:2026-02-16 09:45:43.103293069 +0000 UTC m=+0.796449289,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.927729 4814 apiserver.go:52] "Watching apiserver" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.935704 4814 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.936035 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.936471 4814 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.936586 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.936617 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.937013 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.937114 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.937139 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.937176 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.937205 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:02 crc kubenswrapper[4814]: E0216 09:46:02.937292 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.939094 4814 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.941431 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.941516 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.941567 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.941432 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.941875 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.942278 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.942429 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.942196 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.945022 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.952151 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:39:30.292927501 +0000 UTC Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.970336 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:02 crc kubenswrapper[4814]: I0216 09:46:02.986317 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.007123 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.009653 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010252 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010308 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010387 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010419 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010454 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010484 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010517 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010585 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010621 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010654 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010687 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010720 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010747 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010769 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010793 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010819 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010846 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010882 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010911 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010943 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010985 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011011 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011040 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 09:46:03 crc 
kubenswrapper[4814]: I0216 09:46:03.011072 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011182 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011220 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011254 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011288 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011313 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.010187 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.011832 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012047 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012161 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012351 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012455 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012561 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012645 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012730 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012818 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012884 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012953 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013020 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013090 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013159 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013227 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013291 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013357 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 
09:46:03.013425 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013560 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013638 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013715 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013806 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013873 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013956 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014023 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014088 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014156 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014221 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014299 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014371 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014451 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014520 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014619 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014702 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014782 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014851 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014923 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014990 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015064 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015139 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015213 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015283 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015355 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015427 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015489 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015585 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015655 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015735 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015825 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015896 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015975 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016048 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016121 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016196 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016268 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016339 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016408 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016482 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016590 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016671 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016757 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016830 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016901 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017040 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017112 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017179 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017246 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017317 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017382 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017444 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017508 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017593 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017676 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017777 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017852 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017921 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017987 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 09:46:03 crc 
kubenswrapper[4814]: I0216 09:46:03.018063 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018131 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018199 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018268 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018336 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018410 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018480 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018570 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018644 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018714 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018805 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018882 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018950 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019017 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019084 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019158 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019228 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019294 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019361 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019426 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019509 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019647 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019728 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019810 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019882 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020056 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020136 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020205 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020274 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020351 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020427 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020499 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020587 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020660 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020729 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020812 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020890 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020957 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021031 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021102 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021178 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021245 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021315 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021383 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021450 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021524 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021613 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021681 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021831 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021981 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc 
kubenswrapper[4814]: I0216 09:46:03.022076 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022179 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022505 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022606 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022675 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022743 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022835 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022912 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022981 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023049 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023119 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 09:46:03 
crc kubenswrapper[4814]: I0216 09:46:03.023189 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023418 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023493 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023551 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023595 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023630 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") 
pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023662 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023694 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023726 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023755 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023788 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023824 
4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023863 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023895 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023931 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023963 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023994 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") 
pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024025 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024052 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024084 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024113 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024172 
4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024200 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024228 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024300 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024339 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024374 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024404 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024433 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024469 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024505 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 
crc kubenswrapper[4814]: I0216 09:46:03.024587 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024628 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024692 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024720 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024750 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024782 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024907 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024935 4814 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012072 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.012493 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013038 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013113 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013450 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013454 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013588 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013804 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027308 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013835 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.013887 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014014 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014011 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014268 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014303 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014348 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014571 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027478 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014756 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.014776 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015012 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015179 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015238 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015294 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015233 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015321 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015507 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015578 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015695 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015772 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.015975 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016109 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016248 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016274 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016335 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016365 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016636 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016672 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016817 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016940 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.016964 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017178 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017178 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017262 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017458 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017464 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017471 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017581 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017788 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017804 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.017822 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018210 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018375 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018433 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018664 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018661 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018721 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.018971 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019000 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019026 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019100 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019110 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019272 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019425 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019638 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019800 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019863 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019890 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.019901 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020273 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020313 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020729 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020794 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.020954 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021226 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021770 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021833 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.021956 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022126 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022624 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022879 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.022990 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023338 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023436 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.023818 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024742 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.024821 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.025015 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025038 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025181 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025439 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025792 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025910 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.025979 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026019 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026273 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026357 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026571 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026615 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.026935 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027062 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027189 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027561 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027743 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027752 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028088 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028220 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.027583 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028307 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028321 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028626 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028676 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.028723 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.029156 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.029192 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.029447 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.029839 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.030022 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.030279 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.030304 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.030553 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.030880 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:03.530852111 +0000 UTC m=+21.224008281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.031287 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.031437 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:03.531412215 +0000 UTC m=+21.224568385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.031561 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.030478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.031642 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.032129 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.032214 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.032345 4814 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.032417 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.033959 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.034618 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:03.534592383 +0000 UTC m=+21.227748563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.034757 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.034817 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.034911 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035060 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035349 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035581 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035496 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035503 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035615 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035639 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035511 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035667 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035887 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035989 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035996 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036050 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036478 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036619 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036619 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036747 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036771 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.036875 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.037032 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.037437 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.037675 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.037902 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.035345 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.038221 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.038273 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.039586 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.040268 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.040447 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.040485 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.042172 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.046351 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.046593 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.046628 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.046645 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.046727 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:03.546701112 +0000 UTC m=+21.239857302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.050099 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.053054 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.053216 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.055640 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.056432 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.056578 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.056615 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.056617 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.056684 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.056777 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:03.55675028 +0000 UTC m=+21.249906470 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.057452 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.058309 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.059231 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.059783 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.059936 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.060002 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.060184 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.060383 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.060568 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.060746 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.061424 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.061586 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.062208 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.062425 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.063169 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.063205 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.063708 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.064840 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.064958 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.065462 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.068363 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.068836 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.070826 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.071876 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.072303 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.072437 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.072546 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.072741 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.073200 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.077740 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.089623 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.092122 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.103396 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.107368 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.113022 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.124664 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126030 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126131 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126214 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126233 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126280 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126374 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126518 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126592 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126615 4814 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node 
\"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126626 4814 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126640 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126653 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126665 4814 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126676 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126689 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126700 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126713 4814 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126724 4814 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126743 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126754 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126767 4814 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126779 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126789 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126799 4814 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126811 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126822 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126834 4814 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126845 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126855 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126866 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126877 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126888 4814 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126897 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126920 4814 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126932 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126942 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126951 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126974 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 
09:46:03.126984 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.126994 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127005 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127017 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127026 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127036 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127045 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127055 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" 
(UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127065 4814 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127074 4814 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127084 4814 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127100 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127111 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127122 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127133 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" 
DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127143 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127154 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127164 4814 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127174 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127185 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127194 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127203 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 
09:46:03.127213 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127221 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127231 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127240 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127249 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127259 4814 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127267 4814 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127277 4814 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127287 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127295 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127305 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127314 4814 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127324 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127333 4814 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127342 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127352 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127361 4814 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127370 4814 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127379 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127388 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127399 4814 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127409 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127418 4814 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127428 4814 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127437 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127448 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127457 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127465 4814 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127473 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 
09:46:03.127482 4814 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127492 4814 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127500 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127508 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127518 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127530 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127552 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127561 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" 
(UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127569 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127578 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127587 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127596 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127605 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127614 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127623 4814 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127632 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127641 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127649 4814 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127657 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127667 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127676 4814 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127684 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 
crc kubenswrapper[4814]: I0216 09:46:03.127693 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127701 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127710 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127719 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127728 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127737 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127747 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127756 4814 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127765 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127774 4814 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127783 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127791 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127802 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127811 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127820 4814 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127830 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127839 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127848 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127858 4814 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127868 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127876 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127885 4814 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127894 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127902 4814 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127911 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127919 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127928 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127937 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127947 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: 
I0216 09:46:03.127955 4814 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127964 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127973 4814 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127981 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.127994 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128003 4814 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128014 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128023 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128031 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128040 4814 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128050 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128060 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128069 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128077 4814 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128088 4814 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on 
node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128096 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128104 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128114 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128122 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128131 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128139 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128148 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128158 4814 reconciler_common.go:293] "Volume detached for 
volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128166 4814 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128175 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128187 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128197 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128207 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128215 4814 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128226 4814 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128235 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128244 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128253 4814 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128261 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128269 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128278 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128289 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128298 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128306 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128315 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128324 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128332 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128341 4814 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128350 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128358 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128368 4814 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128376 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128384 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128393 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128402 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128411 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128420 4814 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128428 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128438 4814 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128447 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.128457 4814 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.137949 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.148845 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.149072 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.152781 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.163609 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.165181 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.250885 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 09:46:03 crc kubenswrapper[4814]: W0216 09:46:03.265678 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3de32cfb3fe92b3e44152bded51e03a08b03d9f3cb40b2dc8111f7feef628516 WatchSource:0}: Error finding container 3de32cfb3fe92b3e44152bded51e03a08b03d9f3cb40b2dc8111f7feef628516: Status 404 returned error can't find the container with id 3de32cfb3fe92b3e44152bded51e03a08b03d9f3cb40b2dc8111f7feef628516 Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.265856 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 09:46:03 crc kubenswrapper[4814]: W0216 09:46:03.280522 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-7bd1ca1af28fc247010a90240f5fad2883d7feb6d33d6bcf4cf88a96ae56f14d WatchSource:0}: Error finding container 7bd1ca1af28fc247010a90240f5fad2883d7feb6d33d6bcf4cf88a96ae56f14d: Status 404 returned error can't find the container with id 7bd1ca1af28fc247010a90240f5fad2883d7feb6d33d6bcf4cf88a96ae56f14d Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.284841 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.535512 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.535632 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.535723 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.535746 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:04.535701335 +0000 UTC m=+22.228857515 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.535796 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:04.535775047 +0000 UTC m=+22.228931217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.535836 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.536017 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.536069 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:04.536062204 +0000 UTC m=+22.229218384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.636847 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.636903 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637029 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637047 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:03 crc 
kubenswrapper[4814]: E0216 09:46:03.637058 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637075 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637113 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637123 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:04.637100208 +0000 UTC m=+22.330256388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637124 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: E0216 09:46:03.637197 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:04.6371792 +0000 UTC m=+22.330335380 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.744220 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 09:41:02 +0000 UTC, rotation deadline is 2026-11-02 16:53:35.965437572 +0000 UTC Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.744295 4814 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6223h7m32.221144417s for next certificate rotation Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.851009 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.857903 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.862889 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.865827 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.877306 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.888785 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.905280 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.917564 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.929893 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.943246 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.952450 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:28:02.899316824 +0000 UTC Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.958916 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.975902 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:03 crc kubenswrapper[4814]: I0216 09:46:03.991493 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.005196 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.020928 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.034701 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.051332 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.066788 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.150556 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.150634 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7bd1ca1af28fc247010a90240f5fad2883d7feb6d33d6bcf4cf88a96ae56f14d"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.154781 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.154854 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.154868 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3de32cfb3fe92b3e44152bded51e03a08b03d9f3cb40b2dc8111f7feef628516"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.157672 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.158178 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.160336 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602" exitCode=255 Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.160414 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.160483 4814 scope.go:117] "RemoveContainer" 
containerID="0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.161334 4814 scope.go:117] "RemoveContainer" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.161567 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.162778 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7a5c738141caad00c8e3446f1f973e0ebb4983195012009916c5d4eb8fd02b99"} Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.172503 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.188164 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.210779 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.227176 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.242265 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.260066 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.289761 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.318168 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.322013 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tq9bc"] Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.322501 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.324179 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.324434 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.324943 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.360242 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.391936 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.418197 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.444488 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5f70113-f984-41a9-abda-7b1e787395d8-hosts-file\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.444576 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgg6\" (UniqueName: \"kubernetes.io/projected/f5f70113-f984-41a9-abda-7b1e787395d8-kube-api-access-mfgg6\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.462199 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.502118 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545226 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545334 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545366 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgg6\" (UniqueName: \"kubernetes.io/projected/f5f70113-f984-41a9-abda-7b1e787395d8-kube-api-access-mfgg6\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.545389 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.545364812 +0000 UTC m=+24.238520992 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545430 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545455 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5f70113-f984-41a9-abda-7b1e787395d8-hosts-file\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.545484 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.545557 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5f70113-f984-41a9-abda-7b1e787395d8-hosts-file\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.545594 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.545584379 +0000 UTC m=+24.238740559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.545699 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.545821 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.545792624 +0000 UTC m=+24.238948854 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.546900 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.560330 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.571636 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgg6\" (UniqueName: \"kubernetes.io/projected/f5f70113-f984-41a9-abda-7b1e787395d8-kube-api-access-mfgg6\") pod \"node-resolver-tq9bc\" (UID: \"f5f70113-f984-41a9-abda-7b1e787395d8\") " pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 
09:46:04.593737 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.615213 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.633167 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.636318 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tq9bc" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.646459 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.646548 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646710 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646738 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646752 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646803 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.646786077 +0000 UTC m=+24.339942257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646859 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646874 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646883 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.646935 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.646902 +0000 UTC m=+24.340058180 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:04 crc kubenswrapper[4814]: W0216 09:46:04.648953 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5f70113_f984_41a9_abda_7b1e787395d8.slice/crio-a90bb48dfb1016c43a89faeb1cd2dadb1a664a6e213c080d82c0ebdffe222e8d WatchSource:0}: Error finding container a90bb48dfb1016c43a89faeb1cd2dadb1a664a6e213c080d82c0ebdffe222e8d: Status 404 returned error can't find the container with id a90bb48dfb1016c43a89faeb1cd2dadb1a664a6e213c080d82c0ebdffe222e8d Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.660037 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.684758 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.704464 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.721923 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.736127 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.750183 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.769117 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.784298 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gwtrg"] Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.784676 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.785024 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kb2xj"] Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.785659 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wt4c2"] Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.786003 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.786412 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: W0216 09:46:04.788891 4814 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.788952 4814 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 09:46:04 crc kubenswrapper[4814]: W0216 09:46:04.788995 4814 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace 
"openshift-multus": no relationship found between node 'crc' and this object Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.789007 4814 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789063 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789109 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789212 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ghlbk"] Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789385 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789563 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789639 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789795 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789859 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 
09:46:04 crc kubenswrapper[4814]: W0216 09:46:04.789794 4814 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789859 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.789944 4814 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.789906 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.790117 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793000 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793014 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793168 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793326 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793364 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793328 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.793477 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.805421 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.822649 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.840894 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.853125 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.868152 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.885577 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.899320 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.913255 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.932917 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.949097 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-hostroot\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.949469 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-daemon-config\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.949641 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22f17e0b-afd9-459b-8451-f247a3c76a74-proxy-tls\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.949775 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-os-release\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.949949 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-system-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950075 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-etc-kubernetes\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950201 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950341 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950463 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-cnibin\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950601 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cnibin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950729 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-multus\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950849 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-conf-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.950964 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951096 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cni-binary-copy\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951221 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951329 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-k8s-cni-cncf-io\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951443 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-kubelet\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951575 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-system-cni-dir\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951704 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-bin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951819 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.951962 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.952107 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.952215 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.952336 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.952464 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.952606 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953101 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953223 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22f17e0b-afd9-459b-8451-f247a3c76a74-mcd-auth-proxy-config\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953340 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xld8\" (UniqueName: \"kubernetes.io/projected/22f17e0b-afd9-459b-8451-f247a3c76a74-kube-api-access-5xld8\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953448 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953614 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-multus-certs\") pod \"multus-gwtrg\" (UID: 
\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953741 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22f17e0b-afd9-459b-8451-f247a3c76a74-rootfs\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.953896 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-os-release\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954019 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtjxs\" (UniqueName: \"kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954172 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954290 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn\") pod 
\"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954413 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954576 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954719 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954848 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.954974 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lftp\" (UniqueName: 
\"kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955078 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955197 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955312 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955400 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955508 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rkfmt\" (UniqueName: \"kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955656 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-socket-dir-parent\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.955777 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-netns\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.956759 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, 
/tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.957310 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:39:24.94639582 +0000 UTC Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.971232 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.993211 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.993256 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.993902 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.993959 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.993402 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:04 crc kubenswrapper[4814]: E0216 09:46:04.994356 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.994774 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:04Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.997957 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 09:46:04 crc kubenswrapper[4814]: I0216 09:46:04.999109 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.000734 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.001504 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.002668 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.003353 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 
09:46:05.004206 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.005625 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.006578 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.007969 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.008729 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.010363 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.011196 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.011696 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.012802 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.014771 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.015846 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.017370 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.018056 4814 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.018874 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.024284 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.025343 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.026816 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.027102 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.027725 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.028936 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.029831 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.030914 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.032807 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.033874 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.035387 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.036241 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.037854 4814 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.038099 4814 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.040188 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.041648 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.042577 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.045080 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.046208 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.047208 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.048329 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.051745 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.053198 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.054714 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.055605 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057031 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057366 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-netns\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057414 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-socket-dir-parent\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057454 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22f17e0b-afd9-459b-8451-f247a3c76a74-proxy-tls\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057477 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-os-release\") pod \"multus-gwtrg\" (UID: 
\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057493 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-hostroot\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057507 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-daemon-config\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057522 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-system-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-etc-kubernetes\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057588 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-cnibin\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057623 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-multus\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057639 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cnibin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057653 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-conf-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057669 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057683 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cni-binary-copy\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-k8s-cni-cncf-io\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057711 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-kubelet\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057725 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057742 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-system-cni-dir\") pod \"multus-additional-cni-plugins-kb2xj\" 
(UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057757 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057777 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-bin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057792 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057815 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057828 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057841 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057858 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057878 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057891 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057906 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: 
\"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057922 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-multus-certs\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22f17e0b-afd9-459b-8451-f247a3c76a74-mcd-auth-proxy-config\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057956 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xld8\" (UniqueName: \"kubernetes.io/projected/22f17e0b-afd9-459b-8451-f247a3c76a74-kube-api-access-5xld8\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057973 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtjxs\" (UniqueName: \"kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.057988 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22f17e0b-afd9-459b-8451-f247a3c76a74-rootfs\") pod 
\"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058004 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-os-release\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058019 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058033 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058047 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058062 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units\") pod 
\"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058084 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058100 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058114 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058131 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058149 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058163 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058190 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfmt\" (UniqueName: \"kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058213 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lftp\" (UniqueName: \"kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058365 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058451 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-netns\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc 
kubenswrapper[4814]: I0216 09:46:05.058486 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-socket-dir-parent\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058576 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-system-cni-dir\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058617 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058642 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-bin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058686 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058709 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.058736 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059305 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-os-release\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059360 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059581 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/a89b210e-c736-4ca5-be0a-0044be5e577b-cnibin\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059594 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-os-release\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059606 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059622 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-hostroot\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059631 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059680 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.059695 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060467 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-daemon-config\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060505 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060587 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-system-cni-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060617 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-etc-kubernetes\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060650 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.060655 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-multus-certs\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061077 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061117 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061135 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061570 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-cni-multus\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061577 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061626 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/22f17e0b-afd9-459b-8451-f247a3c76a74-rootfs\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ghlbk\" (UID: 
\"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061698 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-run-k8s-cni-cncf-io\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-multus-conf-dir\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061740 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a89b210e-c736-4ca5-be0a-0044be5e577b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-host-var-lib-kubelet\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.061754 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cnibin\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 
09:46:05.061789 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22f17e0b-afd9-459b-8451-f247a3c76a74-mcd-auth-proxy-config\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.062139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-cni-binary-copy\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.062597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.063007 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.064800 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.065490 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.066603 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.067740 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.068783 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.069273 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.069335 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.069871 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.073203 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22f17e0b-afd9-459b-8451-f247a3c76a74-proxy-tls\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.073357 4814 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.074071 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.075573 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.076447 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.076965 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.086189 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xld8\" (UniqueName: \"kubernetes.io/projected/22f17e0b-afd9-459b-8451-f247a3c76a74-kube-api-access-5xld8\") pod \"machine-config-daemon-wt4c2\" (UID: \"22f17e0b-afd9-459b-8451-f247a3c76a74\") " pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.086356 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtjxs\" (UniqueName: \"kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs\") pod \"ovnkube-node-ghlbk\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") " pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.094505 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.109780 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.117043 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.133478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.144867 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.147228 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.169387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tq9bc" event={"ID":"f5f70113-f984-41a9-abda-7b1e787395d8","Type":"ContainerStarted","Data":"6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448"} Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.169437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tq9bc" 
event={"ID":"f5f70113-f984-41a9-abda-7b1e787395d8","Type":"ContainerStarted","Data":"a90bb48dfb1016c43a89faeb1cd2dadb1a664a6e213c080d82c0ebdffe222e8d"} Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.176222 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.177111 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a6ec5bed3774f2f4c93eff84337879c9aed0eebeaa50aa2d34d0468246c03a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"message\\\":\\\"W0216 09:45:46.619900 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 09:45:46.620466 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771235146 cert, and key in /tmp/serving-cert-2704679380/serving-signer.crt, /tmp/serving-cert-2704679380/serving-signer.key\\\\nI0216 09:45:46.788321 1 observer_polling.go:159] Starting file observer\\\\nW0216 09:45:46.791068 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 09:45:46.791204 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:45:46.791815 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2704679380/tls.crt::/tmp/serving-cert-2704679380/tls.key\\\\\\\"\\\\nF0216 09:45:46.971195 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 
maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.183575 4814 scope.go:117] "RemoveContainer" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602" Feb 16 09:46:05 crc kubenswrapper[4814]: E0216 09:46:05.183723 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.187516 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"14a9006b5b46a579222853a23cc353fcf3bd97adbe0e982fdf70e74019038ac9"} Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.190871 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"09a77d9b575ac0bfc1136ff2a04a970188c65c5c61a1948c2459537bb7889564"} Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.202471 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.293992 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.350642 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.380687 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.400377 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.420616 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40
0a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.438042 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.453432 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.468968 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.488578 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.502635 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.520045 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.537324 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.551633 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.572092 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:05Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.957556 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:04:57.161566501 +0000 UTC Feb 16 09:46:05 crc kubenswrapper[4814]: I0216 09:46:05.971554 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.079063 4814 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.079079 4814 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.080806 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.090079 4814 projected.go:194] Error preparing data for projected volume kube-api-access-rkfmt for pod openshift-multus/multus-additional-cni-plugins-kb2xj: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.090111 4814 projected.go:194] Error preparing data for projected volume kube-api-access-2lftp for pod openshift-multus/multus-gwtrg: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.090215 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt podName:a89b210e-c736-4ca5-be0a-0044be5e577b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.590182713 +0000 UTC m=+24.283338893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rkfmt" (UniqueName: "kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt") pod "multus-additional-cni-plugins-kb2xj" (UID: "a89b210e-c736-4ca5-be0a-0044be5e577b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.090280 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp podName:419c1fde-3a56-45c4-b6aa-5c5b8cde8db6 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:06.590257884 +0000 UTC m=+24.283414064 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2lftp" (UniqueName: "kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp") pod "multus-gwtrg" (UID: "419c1fde-3a56-45c4-b6aa-5c5b8cde8db6") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.195868 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b"} Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.197799 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220"} Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.197862 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a"} Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.199510 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c" exitCode=0 Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.200059 4814 scope.go:117] "RemoveContainer" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.200256 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.200925 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c"} Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.217446 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.233106 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.248969 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.270188 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.272280 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.286846 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.307409 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.321401 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.339983 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.358914 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.377479 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.394579 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.408386 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.440781 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.455305 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.474045 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b6903291
55679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.498434 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.511604 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.522291 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.548183 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.563137 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.577107 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.577336 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:10.577296219 +0000 UTC m=+28.270452409 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.577484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.577640 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:06 crc 
kubenswrapper[4814]: E0216 09:46:06.577701 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.577928 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:10.577908314 +0000 UTC m=+28.271064494 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.577790 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.578064 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:10.578056157 +0000 UTC m=+28.271212327 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.582475 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.596935 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.612508 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.625605 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.639875 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.653055 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.678748 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkfmt\" (UniqueName: \"kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.678977 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.679069 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lftp\" (UniqueName: \"kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp\") pod \"multus-gwtrg\" 
(UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.679199 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.679379 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.679467 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.679547 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.679650 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:10.679635165 +0000 UTC m=+28.372791345 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.679953 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.680020 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.680049 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.680204 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:10.680169348 +0000 UTC m=+28.373325568 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.706603 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lftp\" (UniqueName: \"kubernetes.io/projected/419c1fde-3a56-45c4-b6aa-5c5b8cde8db6-kube-api-access-2lftp\") pod \"multus-gwtrg\" (UID: \"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\") " pod="openshift-multus/multus-gwtrg" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.706603 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkfmt\" (UniqueName: \"kubernetes.io/projected/a89b210e-c736-4ca5-be0a-0044be5e577b-kube-api-access-rkfmt\") pod \"multus-additional-cni-plugins-kb2xj\" (UID: \"a89b210e-c736-4ca5-be0a-0044be5e577b\") " pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.732668 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-rb5nq"] Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.733077 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.734924 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.734956 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.735130 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.736065 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.749317 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.764623 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.778978 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.780255 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-serviceca\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.780383 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-host\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.780517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdr7c\" (UniqueName: \"kubernetes.io/projected/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-kube-api-access-jdr7c\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.802939 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.854787 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.881714 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdr7c\" (UniqueName: \"kubernetes.io/projected/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-kube-api-access-jdr7c\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.881992 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-serviceca\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " 
pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.882245 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-host\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.882383 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-host\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.883712 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-serviceca\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.886743 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.903181 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gwtrg" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.912655 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdr7c\" (UniqueName: \"kubernetes.io/projected/29aff2bc-2aaa-4c9b-9d49-3d12395ec125-kube-api-access-jdr7c\") pod \"node-ca-rb5nq\" (UID: \"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\") " pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:06 crc kubenswrapper[4814]: W0216 09:46:06.916502 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod419c1fde_3a56_45c4_b6aa_5c5b8cde8db6.slice/crio-07ea24190523d782de76ed7a619bc6851c9fda78dd43c0db53988a59265159ce WatchSource:0}: Error finding container 07ea24190523d782de76ed7a619bc6851c9fda78dd43c0db53988a59265159ce: Status 404 returned error can't find the container with id 07ea24190523d782de76ed7a619bc6851c9fda78dd43c0db53988a59265159ce Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.934936 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.946685 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.958197 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:43:16.22058723 +0000 UTC Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.991892 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335a
c39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:06Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.992959 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.993103 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.993387 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.993446 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:06 crc kubenswrapper[4814]: I0216 09:46:06.993489 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:06 crc kubenswrapper[4814]: E0216 09:46:06.993549 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.025884 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.065739 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b3
7dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T0
9:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.110762 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.149997 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.183958 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.199688 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-rb5nq" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.206858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerStarted","Data":"c9e6b9801e7a46631ce0663722df6fc710de489fd4a91e2e724537599ce778dc"} Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.212444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerStarted","Data":"e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919"} Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.212632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerStarted","Data":"07ea24190523d782de76ed7a619bc6851c9fda78dd43c0db53988a59265159ce"} Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.217437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c"} Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.217489 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2"} Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.217502 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3"} Feb 16 09:46:07 crc 
kubenswrapper[4814]: W0216 09:46:07.230517 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29aff2bc_2aaa_4c9b_9d49_3d12395ec125.slice/crio-43949dc9acdb3a333c6c617e94484257dfc11542b28a27d4b804e1caa54be4b9 WatchSource:0}: Error finding container 43949dc9acdb3a333c6c617e94484257dfc11542b28a27d4b804e1caa54be4b9: Status 404 returned error can't find the container with id 43949dc9acdb3a333c6c617e94484257dfc11542b28a27d4b804e1caa54be4b9 Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.240748 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.264938 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.303978 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.350871 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.388213 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.436690 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.471312 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.506361 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.541610 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.583637 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411f
dcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.623092 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519
449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.664289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.702340 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.742280 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.788114 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:07Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:07 crc kubenswrapper[4814]: I0216 09:46:07.959260 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:14:35.520359988 +0000 UTC Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.223221 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88" exitCode=0 Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.223719 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.225897 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rb5nq" event={"ID":"29aff2bc-2aaa-4c9b-9d49-3d12395ec125","Type":"ContainerStarted","Data":"d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.226197 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-rb5nq" 
event={"ID":"29aff2bc-2aaa-4c9b-9d49-3d12395ec125","Type":"ContainerStarted","Data":"43949dc9acdb3a333c6c617e94484257dfc11542b28a27d4b804e1caa54be4b9"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.228662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.228683 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.228693 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38"} Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.241968 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.263075 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.287714 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.303028 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.321604 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.375971 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.392383 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.415360 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.432788 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.448858 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.464805 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.481125 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411f
dcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.492077 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519
449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.507249 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.520277 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.535708 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.558573 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.575480 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.591632 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.607225 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.620133 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.665800 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.703487 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.743629 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.786399 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.822526 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.861013 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d7775
21972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.911862 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:08Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.960434 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:56:43.403170644 +0000 UTC Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.992905 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.992975 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:08 crc kubenswrapper[4814]: I0216 09:46:08.992918 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:08 crc kubenswrapper[4814]: E0216 09:46:08.993132 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:08 crc kubenswrapper[4814]: E0216 09:46:08.993262 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:08 crc kubenswrapper[4814]: E0216 09:46:08.993356 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.109293 4814 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.112437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.112482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.112494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.112703 4814 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.123688 4814 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.124094 4814 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.125479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.125526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.125560 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.125581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.125596 4814 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.139599 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.143919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.143955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.143965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.143982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.143994 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.160758 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.169185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.169239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.169254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.169275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.169291 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.184396 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.189937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.189986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.189998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.190019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.190032 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.202991 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.206603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.206658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.206673 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.206696 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.206715 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.224133 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: E0216 09:46:09.224414 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.226741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.226799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.226819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.226850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.226878 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.233405 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5" exitCode=0 Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.233491 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.252354 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.267331 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.281878 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.297045 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.312351 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.325554 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.329687 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.329735 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.329751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.329772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.329786 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.339545 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.353582 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.369996 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.386411 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.404002 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.426965 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.432791 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.432826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.432836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.432852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.432863 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.462762 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.509849 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:09Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.536211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.536287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.536302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.536328 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.536347 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.640188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.640265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.640282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.640311 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.640334 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.743493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.743572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.743588 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.743608 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.743623 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846034 4814 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846596 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846687 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.846737 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.949180 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.949223 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.949235 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.949254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.949269 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:09Z","lastTransitionTime":"2026-02-16T09:46:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:09 crc kubenswrapper[4814]: I0216 09:46:09.961228 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:34:24.022433272 +0000 UTC Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.092666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.092731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.092747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.092769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.092785 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.195197 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.195239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.195254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.195276 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.195291 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.242407 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.245757 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001" exitCode=0 Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.245799 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.253443 4814 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.263682 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.282965 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.298399 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.298440 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.298452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.298472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.298484 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.315918 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.330289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.346816 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.363172 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.374375 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.390611 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.401511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.401579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.401592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.401615 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.401627 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.411793 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.427992 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.440936 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.463683 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411f
dcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.478872 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519
449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.504266 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 
09:46:10.504947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.504997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.505009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.505029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.505039 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.608779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.608868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.608884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.608906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.608951 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.627693 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.627891 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.627952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.628087 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.628096 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.628165 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:18.62814834 +0000 UTC m=+36.321304530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.628198 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:18.628175291 +0000 UTC m=+36.321331481 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.628247 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:18.628238282 +0000 UTC m=+36.321394482 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.712601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.712705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.712726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.712759 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.712780 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.729433 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.729557 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729706 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729747 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729759 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729773 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:10 crc 
kubenswrapper[4814]: E0216 09:46:10.729805 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729822 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729836 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:18.72981643 +0000 UTC m=+36.422972610 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.729892 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:18.729867411 +0000 UTC m=+36.423023801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.816185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.816257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.816281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.816310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.816330 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.919949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.920029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.920048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.920076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.920097 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:10Z","lastTransitionTime":"2026-02-16T09:46:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.961642 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:14:12.282045953 +0000 UTC Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.993249 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.993284 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:10 crc kubenswrapper[4814]: I0216 09:46:10.993302 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.993452 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.993560 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:10 crc kubenswrapper[4814]: E0216 09:46:10.993747 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.023567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.023641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.023672 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.023702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.023725 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.127028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.127091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.127106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.127130 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.127147 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.197047 4814 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.230004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.230066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.230074 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.230095 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.230111 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.253602 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9" exitCode=0 Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.253662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.275405 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.290381 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.319145 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.337315 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.337364 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.337375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.337394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.337408 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.338215 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee1592
7bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.352562 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.366898 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.379566 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.397395 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.418335 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.435276 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.439749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.439795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.439807 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.439829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.439844 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.448525 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.463405 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.476507 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.494861 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:11Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.542470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.542518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.542563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.542587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.542602 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.644907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.644963 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.644976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.644998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.645013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.747890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.747936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.747945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.747962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.747978 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.851975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.852049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.852069 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.852097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.852118 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.956337 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.956391 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.956404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.956426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.956438 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:11Z","lastTransitionTime":"2026-02-16T09:46:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:11 crc kubenswrapper[4814]: I0216 09:46:11.961839 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:56:36.952202589 +0000 UTC Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.061930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.062419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.062439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.062464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.062481 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.165496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.165556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.165568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.165585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.165596 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.263327 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.263897 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268290 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b" exitCode=0 Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268443 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268455 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.268363 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.289193 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.313428 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.322598 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.344948 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\
" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.361192 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.371891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.371982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.372010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.372048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.372075 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.383558 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:
46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.403271 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.422797 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.445335 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.468924 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.476811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.476891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.476910 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.476936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.476954 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.497101 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82e
c7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.517098 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.532648 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.546164 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.558976 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.574047 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.578861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.578894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.578905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.578923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.578935 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.591579 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.607639 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.619735 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.635031 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.648926 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.666522 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.677460 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.682964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.682998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.683008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.683029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.683040 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.692942 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.711025 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.729610 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.754491 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.768204 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.785921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.785988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.786011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.786041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.786058 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.791114 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:12Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.889341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.889438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.889522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.889597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.889622 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.962159 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:29:04.79588641 +0000 UTC Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.992634 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.992670 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.992635 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:12 crc kubenswrapper[4814]: E0216 09:46:12.992802 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:12 crc kubenswrapper[4814]: E0216 09:46:12.992959 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.992980 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.993046 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.993067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.993094 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:12 crc kubenswrapper[4814]: E0216 09:46:12.993112 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:12 crc kubenswrapper[4814]: I0216 09:46:12.993114 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:12Z","lastTransitionTime":"2026-02-16T09:46:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.011309 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z 
is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.033350 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.057194 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.073756 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.090822 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.095876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.095924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.095936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.095955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.095970 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.111394 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.126404 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.141620 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.157302 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.172107 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.186614 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.198951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.199760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.199804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.199829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.199843 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.201314 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.215092 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.226770 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.276139 4814 generic.go:334] "Generic (PLEG): container finished" podID="a89b210e-c736-4ca5-be0a-0044be5e577b" containerID="56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b" exitCode=0 Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.276286 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.276819 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerDied","Data":"56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.277116 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.293300 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.302961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.302990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.303001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.303020 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.303032 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.309670 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.310211 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.327676 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.344484 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.359241 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.376068 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.391645 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.407910 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.407930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.408078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.408091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.408111 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.408129 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.422798 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z 
is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.438381 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.455519 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.469175 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.482067 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.506458 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.511841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.511880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.511891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.511969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.511986 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.519707 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.536938 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.551377 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.563667 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.578719 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.592152 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.603228 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.614714 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.614753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.614766 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.614787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.614801 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.618033 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.638396 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.651815 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.663725 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d7775
21972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.689880 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.713166 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.717151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.717188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.717200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.717217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.717228 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.728478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.820190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.820253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.820316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.820338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.820349 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.923040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.923415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.923587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.923746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.923845 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:13Z","lastTransitionTime":"2026-02-16T09:46:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:13 crc kubenswrapper[4814]: I0216 09:46:13.962620 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:35:53.743823899 +0000 UTC Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.027740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.027822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.027846 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.027874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.027892 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.131694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.131752 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.131765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.131792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.131813 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.234530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.234643 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.234661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.234691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.234715 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.288352 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" event={"ID":"a89b210e-c736-4ca5-be0a-0044be5e577b","Type":"ContainerStarted","Data":"a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.288498 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.308053 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85
a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.330452 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.337902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.337998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.338024 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.338058 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.338082 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.353478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.369329 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.388720 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.411921 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.433771 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.441951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.441991 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.442001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.442019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.442030 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.445916 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.460054 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.473335 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.506602 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.525497 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.544194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.544559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.544660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 
09:46:14.544750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.544838 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.544865 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.570013 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:14Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.648715 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.648999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.649071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.649194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.649282 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.752466 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.752509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.752522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.752565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.752578 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.855427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.855484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.855502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.855525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.855554 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.958806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.958854 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.958868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.958890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.958903 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:14Z","lastTransitionTime":"2026-02-16T09:46:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.965459 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:59:27.30370649 +0000 UTC Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.992797 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.992795 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:14 crc kubenswrapper[4814]: E0216 09:46:14.992986 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:14 crc kubenswrapper[4814]: I0216 09:46:14.993031 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:14 crc kubenswrapper[4814]: E0216 09:46:14.993139 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:14 crc kubenswrapper[4814]: E0216 09:46:14.993211 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.061591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.061629 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.061642 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.061660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.061673 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.164764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.164818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.164830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.164852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.164878 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.266844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.266880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.266889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.266905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.266915 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.290977 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.301974 4814 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.369512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.369591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.369603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.369632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.369647 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.473008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.473087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.473111 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.473144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.473169 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.576199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.576269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.576282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.576304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.576316 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.679794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.679869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.679889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.679913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.679930 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.783096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.783154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.783173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.783198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.783216 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.887401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.887458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.887470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.887493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.887506 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.966042 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:59:03.54870332 +0000 UTC Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.990858 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.990915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.990929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.990955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:15 crc kubenswrapper[4814]: I0216 09:46:15.990979 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:15Z","lastTransitionTime":"2026-02-16T09:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.093849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.093931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.093950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.093981 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.094001 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.197003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.197064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.197092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.197121 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.197142 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.303072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.303142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.303161 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.303191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.303210 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.304516 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/0.log" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.308881 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518" exitCode=1 Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.308932 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.309947 4814 scope.go:117] "RemoveContainer" containerID="c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.337657 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.361980 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d7775
21972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.390110 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.416058 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.416162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.416189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.416227 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.416262 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.459419 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.482199 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.506610 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868
dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.518836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.518899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.518911 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.518932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.518945 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.521454 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.534924 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.556123 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.572769 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.587716 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.601978 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.617829 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.621916 4814 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.621962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.621979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.621999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.622015 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.631727 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:16Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.725913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.726030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.726056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.726097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.726124 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.830346 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.830419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.830437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.830465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.830487 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.933675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.933742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.933757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.934144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.934190 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:16Z","lastTransitionTime":"2026-02-16T09:46:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.967165 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:06:57.988800323 +0000 UTC Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.992946 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.992979 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:16 crc kubenswrapper[4814]: I0216 09:46:16.992952 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:16 crc kubenswrapper[4814]: E0216 09:46:16.993138 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:16 crc kubenswrapper[4814]: E0216 09:46:16.993283 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:16 crc kubenswrapper[4814]: E0216 09:46:16.993390 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.036902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.036941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.036950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.036968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.036980 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.139129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.139185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.139198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.139216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.139228 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.241866 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.241934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.241960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.241990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.242012 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.314918 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/0.log" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.318446 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.318664 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.339378 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.344197 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.344220 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.344228 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.344243 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.344254 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.361973 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.379244 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.395393 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.412347 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411f
dcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.428358 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519
449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.447763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.447808 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.447818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.447836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.447847 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.456886 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.476394 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.494593 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.514824 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped 
ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.533498 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550480 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.550825 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.565310 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.579682 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:17Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.654993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.655077 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.655100 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.655136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.655159 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.758933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.758998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.759016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.759041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.759065 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.861749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.861796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.861807 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.861825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.861834 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.964516 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.964569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.964581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.964602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.964613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:17Z","lastTransitionTime":"2026-02-16T09:46:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:17 crc kubenswrapper[4814]: I0216 09:46:17.972394 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 22:13:45.567813409 +0000 UTC Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.068806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.068907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.068931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.068969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.068993 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.172650 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.172710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.172727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.172752 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.172770 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.276244 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.276314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.276331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.276358 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.276382 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.326003 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/1.log" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.326948 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/0.log" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.331036 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a" exitCode=1 Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.331088 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.331141 4814 scope.go:117] "RemoveContainer" containerID="c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.332418 4814 scope.go:117] "RemoveContainer" containerID="17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.332730 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.366405 4814 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.380097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.380155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.380175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.380203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.380222 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.389796 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.410979 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.433598 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.456986 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.473660 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.483227 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.483316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.483335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.483370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.483388 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.494350 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee1592
7bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.516240 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.516626 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992"] Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.517254 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.519924 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.520338 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.541452 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.558092 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.582830 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.588430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.588510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.588561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.588609 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.588631 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.613552 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.623567 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.623632 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.623680 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlpm\" (UniqueName: \"kubernetes.io/projected/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-kube-api-access-knlpm\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.623768 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.635387 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.656011 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.679047 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.696271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.696361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.696384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.696414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.696436 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.700011 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.721697 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725106 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725263 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725325 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725378 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725460 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.725494 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlpm\" (UniqueName: \"kubernetes.io/projected/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-kube-api-access-knlpm\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.726649 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.726904 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.727024 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.726764 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:46:34.726745595 +0000 UTC m=+52.419901775 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.727156 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:34.727121334 +0000 UTC m=+52.420277594 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.727194 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 09:46:34.727176146 +0000 UTC m=+52.420332486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.728327 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.734879 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.736696 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.748096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlpm\" (UniqueName: \"kubernetes.io/projected/3f0a84b8-4c95-425c-ba79-884d3bc65ca2-kube-api-access-knlpm\") pod \"ovnkube-control-plane-749d76644c-6d992\" (UID: \"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.751233 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.766874 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d7775
21972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.787895 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.799097 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.799131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.799140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.799157 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.799166 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.806584 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.821313 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.826200 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.826310 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826460 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826509 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826519 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826603 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826624 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826544 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826697 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:34.826674602 +0000 UTC m=+52.519830822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.826751 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:34.826730373 +0000 UTC m=+52.519886643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.835962 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.838946 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" Feb 16 09:46:18 crc kubenswrapper[4814]: W0216 09:46:18.857188 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f0a84b8_4c95_425c_ba79_884d3bc65ca2.slice/crio-eb90dda8b55ac04f91b430e5add3d4095b5e4f767714d05eb9ebe162af888c4a WatchSource:0}: Error finding container eb90dda8b55ac04f91b430e5add3d4095b5e4f767714d05eb9ebe162af888c4a: Status 404 returned error can't find the container with id eb90dda8b55ac04f91b430e5add3d4095b5e4f767714d05eb9ebe162af888c4a Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.865401 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.882155 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.902101 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.902147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.902157 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.902348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.902572 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:18Z","lastTransitionTime":"2026-02-16T09:46:18Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.903554 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.922830 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.936419 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:18Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.972818 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:51:26.738061049 +0000 UTC Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.993465 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.993666 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.993728 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.993924 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.994165 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:18 crc kubenswrapper[4814]: E0216 09:46:18.994263 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:18 crc kubenswrapper[4814]: I0216 09:46:18.994580 4814 scope.go:117] "RemoveContainer" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.005992 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.006035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.006045 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.006059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.006069 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.139067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.139108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.139120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.139139 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.139155 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.241501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.241552 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.241561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.241577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.241586 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.245370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.245407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.245420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.245439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.245449 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.258788 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.263773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.263801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.263810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.263828 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.263838 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.266416 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l9dlr"] Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.266890 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.266952 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.278417 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.282864 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.282895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.282904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.282916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.282925 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.283988 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.297011 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.300294 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742f
d0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.301031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.301061 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.301077 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.301098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.301114 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.317579 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.328695 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"sys
temUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.333552 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.334121 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.334179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.334193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.334214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.334229 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.337015 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" event={"ID":"3f0a84b8-4c95-425c-ba79-884d3bc65ca2","Type":"ContainerStarted","Data":"d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.337334 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" event={"ID":"3f0a84b8-4c95-425c-ba79-884d3bc65ca2","Type":"ContainerStarted","Data":"c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.337589 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" event={"ID":"3f0a84b8-4c95-425c-ba79-884d3bc65ca2","Type":"ContainerStarted","Data":"eb90dda8b55ac04f91b430e5add3d4095b5e4f767714d05eb9ebe162af888c4a"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.338190 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.338266 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghwz\" (UniqueName: \"kubernetes.io/projected/83343376-433f-46da-b90f-9e1dd9270ea4-kube-api-access-bghwz\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.338689 4814 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/1.log" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.343639 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.345654 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.346140 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.349451 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.349596 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.350886 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.353467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 
09:46:19.353504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.353519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.353558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.353572 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.382233 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.395042 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.412602 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.426035 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.439265 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.439302 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.439489 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. 
No retries permitted until 2026-02-16 09:46:19.939470501 +0000 UTC m=+37.632626671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.438990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.440141 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghwz\" (UniqueName: \"kubernetes.io/projected/83343376-433f-46da-b90f-9e1dd9270ea4-kube-api-access-bghwz\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.454101 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.455799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.455822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.455830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.455847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.455858 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.462321 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghwz\" (UniqueName: \"kubernetes.io/projected/83343376-433f-46da-b90f-9e1dd9270ea4-kube-api-access-bghwz\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.474197 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.492747 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b40
0a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.510423 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.527236 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.542405 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.557524 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.559830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.559875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.559886 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 
09:46:19.559907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.559918 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.575300 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.595170 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.608450 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc 
kubenswrapper[4814]: I0216 09:46:19.624383 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.636705 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.650322 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.662384 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.662451 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.662465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.662484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.662497 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.664422 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.674735 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.690646 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.705114 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 
09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.719328 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.732676 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09
:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.750389 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-p
roxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.764932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.764976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.764987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.765003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.765014 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.768040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.781649 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:19Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.866905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.866967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.866986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.867007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.867020 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.945260 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.945562 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:19 crc kubenswrapper[4814]: E0216 09:46:19.945677 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:20.945652139 +0000 UTC m=+38.638808409 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.969370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.969405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.969419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.969437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.969448 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:19Z","lastTransitionTime":"2026-02-16T09:46:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:19 crc kubenswrapper[4814]: I0216 09:46:19.973589 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:01:21.987675616 +0000 UTC Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.072416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.072458 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.072470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.072490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.072503 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.177029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.177090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.177104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.177126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.177141 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.280427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.280476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.280488 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.280511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.280523 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.383575 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.383639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.383657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.383681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.383697 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.487452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.487567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.487599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.487625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.487643 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.591212 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.591274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.591290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.591316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.591329 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.698086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.698159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.698182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.698215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.698239 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.802778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.802865 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.802879 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.802901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.802921 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.906583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.906655 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.906670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.906697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.906720 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:20Z","lastTransitionTime":"2026-02-16T09:46:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.956909 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.957184 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.957337 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:22.957308405 +0000 UTC m=+40.650464615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.973770 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:20:56.151146394 +0000 UTC
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.993183 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.993277 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.993300 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:46:20 crc kubenswrapper[4814]: I0216 09:46:20.993338 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.993387 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.993472 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.993605 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:46:20 crc kubenswrapper[4814]: E0216 09:46:20.993740 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.009462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.009583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.009602 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.009625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.009642 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.113264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.113336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.113353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.113381 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.113401 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.216756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.216820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.216858 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.216895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.216917 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.320233 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.320300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.320323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.320350 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.320369 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.423014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.423089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.423112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.423143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.423165 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.525968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.526055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.526082 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.526127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.526154 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.630207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.630270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.630290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.630318 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.630336 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.733344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.733389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.733402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.733424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.733440 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.836168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.836251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.836278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.836309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.836331 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.940122 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.940194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.940218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.940253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.940275 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:21Z","lastTransitionTime":"2026-02-16T09:46:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:21 crc kubenswrapper[4814]: I0216 09:46:21.974848 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:05:06.891101515 +0000 UTC
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.043459 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.043913 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.044090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.044299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.044508 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.147796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.147885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.147904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.147929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.147946 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.252021 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.252473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.252742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.252900 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.253032 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.357140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.357205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.357226 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.357254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.357274 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.461086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.461144 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.461161 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.461187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.461207 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.564808 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.565259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.565466 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.565765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.565972 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.669008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.669086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.669114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.669148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.669173 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.772206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.772265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.772282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.772311 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.772329 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.876134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.876378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.876413 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.876450 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.876486 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.975424 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:10:21.461738356 +0000 UTC
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.979992 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.980229 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.980257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.980292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.980304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.980319 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:26.980296901 +0000 UTC m=+44.673453121 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.980324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.980354 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:22Z","lastTransitionTime":"2026-02-16T09:46:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.993045 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.993111 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.993169 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.993285 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:22 crc kubenswrapper[4814]: I0216 09:46:22.993368 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.993443 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.993715 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:22 crc kubenswrapper[4814]: E0216 09:46:22.993792 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.016647 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.053938 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.070515 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.083165 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.083236 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.083275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.083308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.083331 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.088064 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.108754 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.143320 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 
09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.162040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.179818 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.186962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.186996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.187005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.187022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.187032 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.204893 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:
46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.225604 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.254235 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.272400 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.289901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.289998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.290016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.290046 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.290065 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.293601 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.314375 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.347015 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.362238 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:23 crc 
kubenswrapper[4814]: I0216 09:46:23.393246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.393318 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.393345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.393376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.393397 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.496658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.496723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.496742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.496766 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.496782 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.599798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.599890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.599910 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.599940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.599963 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.703509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.703628 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.703657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.703693 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.703722 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.807354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.807411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.807427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.807449 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.807465 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.912934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.913017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.913051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.913079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.913098 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:23Z","lastTransitionTime":"2026-02-16T09:46:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:23 crc kubenswrapper[4814]: I0216 09:46:23.976651 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:22:27.002380849 +0000 UTC Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.015989 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.016082 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.016107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.016142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.016167 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.122199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.122252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.122263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.122283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.122297 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.225834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.225917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.225940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.225975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.225999 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.329857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.329909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.329920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.329942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.329954 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.433802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.433870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.433890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.433917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.433935 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.537333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.537403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.537424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.537452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.537470 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.640640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.640710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.640727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.640756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.640774 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.744150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.744249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.744272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.744306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.744327 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.847193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.847268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.847285 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.847307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.847323 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.951240 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.951303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.951323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.951352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.951376 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:24Z","lastTransitionTime":"2026-02-16T09:46:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.977289 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:47:47.636901046 +0000 UTC Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.993387 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.993448 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.993448 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:24 crc kubenswrapper[4814]: I0216 09:46:24.993658 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:24 crc kubenswrapper[4814]: E0216 09:46:24.993652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:24 crc kubenswrapper[4814]: E0216 09:46:24.993849 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:24 crc kubenswrapper[4814]: E0216 09:46:24.994024 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:24 crc kubenswrapper[4814]: E0216 09:46:24.994167 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.054520 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.054607 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.054620 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.054641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.054654 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.157464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.157516 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.157553 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.157581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.157644 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.260302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.260368 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.260394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.260423 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.260446 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.363792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.363872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.363896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.363927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.363953 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.467679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.467748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.467772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.467802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.467824 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.571512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.571735 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.571755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.571781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.571802 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.676110 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.676234 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.676264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.676347 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.676417 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.779672 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.779781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.779802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.779828 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.779898 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.883242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.883314 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.883332 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.883412 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.883437 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.977665 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:21:03.888047988 +0000 UTC Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.987600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.987699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.987718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.987748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:25 crc kubenswrapper[4814]: I0216 09:46:25.987773 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:25Z","lastTransitionTime":"2026-02-16T09:46:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.091952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.092010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.092030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.092055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.092073 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.195264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.195341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.195363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.195394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.195416 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.298353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.298426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.298452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.298490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.298515 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.402691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.402746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.402759 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.402781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.402796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.507029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.507093 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.507111 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.507138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.507156 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.610906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.610983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.611005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.611036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.611058 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.713467 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.713581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.713606 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.713638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.713664 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.817078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.817168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.817192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.817242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.817268 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.921346 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.921409 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.921427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.921452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.921470 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:26Z","lastTransitionTime":"2026-02-16T09:46:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.978300 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:50:18.331092903 +0000 UTC
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.993325 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.993403 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.993495 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:46:26 crc kubenswrapper[4814]: I0216 09:46:26.993622 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:46:26 crc kubenswrapper[4814]: E0216 09:46:26.993843 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:46:26 crc kubenswrapper[4814]: E0216 09:46:26.994004 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:46:26 crc kubenswrapper[4814]: E0216 09:46:26.994144 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:46:26 crc kubenswrapper[4814]: E0216 09:46:26.994275 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.024369 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.024435 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.024449 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.024476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.024492 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.037254 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:27 crc kubenswrapper[4814]: E0216 09:46:27.037519 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 09:46:27 crc kubenswrapper[4814]: E0216 09:46:27.037726 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:35.037693123 +0000 UTC m=+52.730849403 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.127548 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.127598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.127612 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.127632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.127643 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.230627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.230686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.230701 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.230753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.230769 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.334049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.334094 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.334106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.334127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.334140 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.436755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.436824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.436847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.436881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.436904 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.540129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.540193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.540206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.540226 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.540236 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.643382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.643461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.643481 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.643510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.643529 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.746816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.746861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.746873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.746892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.746903 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.849941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.849993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.850016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.850045 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.850070 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.953857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.953934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.953959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.953997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.954022 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:27Z","lastTransitionTime":"2026-02-16T09:46:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:27 crc kubenswrapper[4814]: I0216 09:46:27.979422 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 20:20:08.129592152 +0000 UTC
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.057113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.057197 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.057224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.057250 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.057268 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.160510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.160577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.160590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.160613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.160628 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.263385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.263470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.263497 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.263530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.263601 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.366377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.366500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.366577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.366613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.366634 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.469397 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.469455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.469469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.469490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.469502 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.572248 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.572309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.572322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.572345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.572364 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.675928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.675998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.676016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.676044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.676063 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.779325 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.779361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.779376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.779395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.779447 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.882509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.882682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.882708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.882733 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.882749 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.979830 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:10:30.517564768 +0000 UTC
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.984975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.985046 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.985070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.985103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.985127 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:28Z","lastTransitionTime":"2026-02-16T09:46:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.993237 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.993288 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.993297 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:46:28 crc kubenswrapper[4814]: E0216 09:46:28.993369 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:46:28 crc kubenswrapper[4814]: I0216 09:46:28.993378 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:46:28 crc kubenswrapper[4814]: E0216 09:46:28.993455 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:46:28 crc kubenswrapper[4814]: E0216 09:46:28.993572 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:46:28 crc kubenswrapper[4814]: E0216 09:46:28.993645 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.088406 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.088610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.088700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.088732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.088796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.191456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.191569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.191627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.191660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.191686 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.295054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.295129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.295149 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.295177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.295194 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.398092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.398134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.398143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.398162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.398178 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.486188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.486238 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.486251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.486271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.486284 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: E0216 09:46:29.509124 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:29Z is after 2025-08-24T17:21:41Z"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.514438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.514476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.514488 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.514509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.514521 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: E0216 09:46:29.531851 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:29Z is after 2025-08-24T17:21:41Z"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.536625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.536691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.536710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.536744 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.536761 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: E0216 09:46:29.556073 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:29Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.560613 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.560651 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.560661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.560682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.560694 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.582470 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.582513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.582526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.582573 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.582589 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:29Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:29 crc kubenswrapper[4814]: E0216 09:46:29.599191 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.600983 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.601065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.601078 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.601104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.601116 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.703874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.703927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.703938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.703956 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.703967 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.806229 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.806271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.806280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.806300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.806312 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.908795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.908838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.908852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.908870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.908881 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:29Z","lastTransitionTime":"2026-02-16T09:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:29 crc kubenswrapper[4814]: I0216 09:46:29.980076 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:53:17.491761515 +0000 UTC Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.012001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.012083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.012103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.012129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.012147 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.115686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.115718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.115727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.115743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.115753 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.217877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.217916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.217928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.217946 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.217958 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.321353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.321418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.321436 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.321465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.321484 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.424593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.424649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.424663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.424683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.424696 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.527383 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.527428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.527453 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.527481 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.527502 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.528146 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.537759 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.543050 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.560697 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.580555 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c67395863584a8826a25de0a8a69ca7557dc4596c77f70d9cf4fe73c01aee518\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:15Z\\\",\\\"message\\\":\\\" 6090 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:15.383373 6090 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:15.383386 6090 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:15.383403 6090 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0216 09:46:15.383441 6090 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:15.383441 6090 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09:46:15.383461 6090 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 09:46:15.383470 6090 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 09:46:15.383506 6090 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:15.383549 6090 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:15.383556 6090 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:15.383557 6090 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:15.383569 6090 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:15.383576 6090 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:15.383588 6090 factory.go:656] Stopping watch factory\\\\nI0216 09:46:15.383610 6090 ovnkube.go:599] Stopped ovnkube\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.593254 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc 
kubenswrapper[4814]: I0216 09:46:30.606480 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.622599 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.629993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.630090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.630113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.630148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.630176 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.635818 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.652974 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.671218 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.686878 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 
09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.707988 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.720800 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.734003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.734062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.734079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.734104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.734122 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.738768 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:
46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.757289 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.776226 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.791299 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:30Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.837280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.837353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.837377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.837411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.837439 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.940476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.940609 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.940636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.940667 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.940686 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:30Z","lastTransitionTime":"2026-02-16T09:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.981276 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:17:37.819592094 +0000 UTC Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.992989 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.993097 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.993207 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:30 crc kubenswrapper[4814]: E0216 09:46:30.993444 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:30 crc kubenswrapper[4814]: I0216 09:46:30.993518 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:30 crc kubenswrapper[4814]: E0216 09:46:30.993690 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:30 crc kubenswrapper[4814]: E0216 09:46:30.993819 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:30 crc kubenswrapper[4814]: E0216 09:46:30.994060 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.044439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.044498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.044510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.044530 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.044565 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.147929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.147985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.148003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.148031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.148050 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.251965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.252029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.252051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.252080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.252102 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.275031 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.276676 4814 scope.go:117] "RemoveContainer" containerID="17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.296385 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.326482 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.345851 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.355730 4814 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.355821 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.355847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.355881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.355908 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.369186 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.388030 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc 
kubenswrapper[4814]: I0216 09:46:31.406436 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.422316 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.445406 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.458964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.458996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.459010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.459032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.459047 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.462879 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.480445 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.494209 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.509045 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.524836 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.540025 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.552168 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.570009 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.587875 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:31Z is after 2025-08-24T17:21:41Z" Feb 16 
09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.597723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.597784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.597805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.597838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.597862 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.700641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.700684 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.700708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.700737 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.700754 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.805521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.805592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.805604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.805622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.805633 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.907733 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.907765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.907773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.907788 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.907796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:31Z","lastTransitionTime":"2026-02-16T09:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:31 crc kubenswrapper[4814]: I0216 09:46:31.981980 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:44:41.498320126 +0000 UTC Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.010351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.010390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.010401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.010418 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.010429 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.113162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.113215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.113228 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.113253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.113269 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.216259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.216309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.216323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.216345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.216359 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.244478 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.268024 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.282417 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.297882 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.311437 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.319171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.319225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.319237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.319258 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.319270 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.326495 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z 
is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.341263 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.356509 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.371506 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.386661 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.400174 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.402133 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/2.log" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.402749 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/1.log" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.405775 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc" exitCode=1 Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.405807 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.405877 4814 scope.go:117] "RemoveContainer" containerID="17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.407074 4814 scope.go:117] "RemoveContainer" containerID="dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc" Feb 16 09:46:32 crc kubenswrapper[4814]: E0216 09:46:32.409092 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421287 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421701 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.421730 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.438567 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc 
kubenswrapper[4814]: I0216 09:46:32.453830 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.467358 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.480570 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.492737 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.503928 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.519702 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.524637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.524676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.524689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.524712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.524725 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.532698 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.544478 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.555180 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.563789 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.576708 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.593776 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.610198 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.623912 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.627484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.627523 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.627556 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.627579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.627590 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.638918 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z 
is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.652684 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.669182 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.681046 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.695324 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.708310 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.730811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.731247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.731260 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.731339 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.731358 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.734152 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 
09:46:17.520672 6264 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 
09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.750597 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:32Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:32 crc 
kubenswrapper[4814]: I0216 09:46:32.834984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.835303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.835367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.835450 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.835507 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.938985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.939053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.939071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.939098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.939116 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:32Z","lastTransitionTime":"2026-02-16T09:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.982876 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:31:03.210383937 +0000 UTC Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.993364 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.993408 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.993383 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:32 crc kubenswrapper[4814]: E0216 09:46:32.993642 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:32 crc kubenswrapper[4814]: I0216 09:46:32.993703 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:32 crc kubenswrapper[4814]: E0216 09:46:32.993848 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:32 crc kubenswrapper[4814]: E0216 09:46:32.993930 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:32 crc kubenswrapper[4814]: E0216 09:46:32.994073 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.021680 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.035667 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc 
kubenswrapper[4814]: I0216 09:46:33.041914 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.041940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.041949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.041967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.041976 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.057463 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.079310 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.100040 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.124442 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.139052 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.144872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.144917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.144927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.144945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.144957 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.152597 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.167218 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.188132 4814 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.207998 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.222002 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.240848 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 
09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.247373 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.247430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.247447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.247471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.247486 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.261231 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.276848 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.291187 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.305261 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:33Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:33 crc 
kubenswrapper[4814]: I0216 09:46:33.350138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.350186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.350197 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.350217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.350229 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.410690 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/2.log" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.453567 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.453622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.453635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.453652 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.453661 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.555945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.556006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.556022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.556048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.556067 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.659228 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.659302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.659326 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.659356 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.659373 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.762763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.762811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.762826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.762844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.762859 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.866159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.866224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.866242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.866267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.866285 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.969227 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.969280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.969292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.969313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.969324 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:33Z","lastTransitionTime":"2026-02-16T09:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:33 crc kubenswrapper[4814]: I0216 09:46:33.983778 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:22:34.95400173 +0000 UTC Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.071591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.071656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.071678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.071707 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.071725 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.175489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.175891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.175904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.175929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.175941 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.278972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.279059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.279092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.279122 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.279142 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.382390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.382448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.382464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.382492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.382513 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.485714 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.485830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.485846 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.485868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.485881 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.596964 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.597074 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.597106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.597140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.597161 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.699878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.699939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.699959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.699986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.700004 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.732840 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.733035 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 09:47:06.733007696 +0000 UTC m=+84.426163886 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.733120 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.733169 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.733294 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.733347 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 09:47:06.733335885 +0000 UTC m=+84.426492075 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.733295 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.733438 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:47:06.733423277 +0000 UTC m=+84.426579477 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.803498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.803592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.803610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.803631 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.803643 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.834935 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.835074 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835225 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835280 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835283 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835297 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:34 crc 
kubenswrapper[4814]: E0216 09:46:34.835313 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835335 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835387 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:47:06.835365095 +0000 UTC m=+84.528521455 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.835422 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:47:06.835398236 +0000 UTC m=+84.528554456 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.906439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.906505 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.906521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.906659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.906682 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:34Z","lastTransitionTime":"2026-02-16T09:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.984211 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:14:53.297679637 +0000 UTC Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.992739 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.992912 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.993009 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.993091 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.993176 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.993255 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:34 crc kubenswrapper[4814]: I0216 09:46:34.993657 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:34 crc kubenswrapper[4814]: E0216 09:46:34.993758 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.009595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.009677 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.009706 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.009747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.009772 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.038253 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:35 crc kubenswrapper[4814]: E0216 09:46:35.038481 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:35 crc kubenswrapper[4814]: E0216 09:46:35.038647 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:46:51.038616585 +0000 UTC m=+68.731772765 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.112948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.112999 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.113013 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.113035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.113050 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.215814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.215897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.215917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.215944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.215960 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.319996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.320055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.320068 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.320089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.320103 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.422267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.422334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.422353 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.422378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.422396 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.525288 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.525332 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.525343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.525360 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.525371 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.628160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.628203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.628214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.628235 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.628248 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.731180 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.731244 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.731254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.731271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.731281 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.834841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.834917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.834935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.834965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.834988 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.948191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.948290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.948309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.948576 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.949743 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:35Z","lastTransitionTime":"2026-02-16T09:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:35 crc kubenswrapper[4814]: I0216 09:46:35.984877 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:47:31.959819932 +0000 UTC Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.052089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.052123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.052135 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.052151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.052164 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.155140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.155210 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.155232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.155257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.155275 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.258924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.258986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.259010 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.259040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.259063 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.361728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.361815 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.361842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.361873 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.361891 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.465201 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.465266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.465283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.465348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.465369 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.568730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.568818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.568842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.568877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.568901 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.671937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.672017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.672042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.672073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.672099 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.775448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.775507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.775524 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.775577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.775601 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.878646 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.878698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.878717 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.878746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.878767 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.982195 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.982268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.982299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.982337 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.982363 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:36Z","lastTransitionTime":"2026-02-16T09:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.985714 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:26:01.097232587 +0000 UTC Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.993438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.993585 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.993912 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:36 crc kubenswrapper[4814]: E0216 09:46:36.994188 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:36 crc kubenswrapper[4814]: E0216 09:46:36.994380 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:36 crc kubenswrapper[4814]: E0216 09:46:36.994759 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:36 crc kubenswrapper[4814]: I0216 09:46:36.994226 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:36 crc kubenswrapper[4814]: E0216 09:46:36.995247 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.084965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.085044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.085059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.085084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.085098 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.188557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.188615 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.188627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.188662 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.188679 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.292825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.292918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.292941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.292977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.293002 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.395764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.395829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.395842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.395864 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.395882 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.499653 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.499712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.499730 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.499755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.499771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.602444 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.602623 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.602649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.602678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.602699 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.706321 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.706419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.706445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.706473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.706490 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.809796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.809864 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.809882 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.809908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.809925 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.913269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.913343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.913369 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.913419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.913478 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:37Z","lastTransitionTime":"2026-02-16T09:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:37 crc kubenswrapper[4814]: I0216 09:46:37.985886 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:21:39.896043343 +0000 UTC Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.015911 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.015978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.015996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.016017 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.016035 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.119461 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.119727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.119762 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.119844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.119918 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.222416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.222483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.222502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.222529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.222613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.333037 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.333143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.333168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.333204 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.333228 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.472422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.472520 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.472588 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.472619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.472640 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.576002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.576053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.576064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.576085 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.576101 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.680700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.680789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.680814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.680854 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.680882 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.784723 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.784777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.784790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.784811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.784825 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.888154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.888237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.888261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.888291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.888309 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.986659 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:44:14.813193723 +0000 UTC Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.991702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.991760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.991780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.991806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.991827 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:38Z","lastTransitionTime":"2026-02-16T09:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.992589 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.992635 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:38 crc kubenswrapper[4814]: E0216 09:46:38.992914 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.992710 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:38 crc kubenswrapper[4814]: E0216 09:46:38.993425 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:38 crc kubenswrapper[4814]: I0216 09:46:38.992635 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:38 crc kubenswrapper[4814]: E0216 09:46:38.993881 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:38 crc kubenswrapper[4814]: E0216 09:46:38.993065 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.097202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.097260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.097278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.097306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.097324 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.200949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.201006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.201023 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.201049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.201066 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.304654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.305228 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.305409 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.305691 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.305913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.408764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.408830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.408849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.408878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.408897 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.511929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.512232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.512298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.512391 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.512462 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.615669 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.615747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.615768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.615799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.615821 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.694725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.694789 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.694802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.694824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.694841 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.715671 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.722592 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.722632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.722641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.722659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.722673 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.743238 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.747646 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.747874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.748027 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.748183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.748357 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.768641 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.773644 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.773708 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.773732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.773765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.773796 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.791890 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.801240 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.801312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.801324 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.801341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.801351 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.822293 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:39Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:39 crc kubenswrapper[4814]: E0216 09:46:39.822431 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.823932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.823973 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.823987 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.824005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.824017 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.927026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.927099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.927123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.927156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.927181 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:39Z","lastTransitionTime":"2026-02-16T09:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:39 crc kubenswrapper[4814]: I0216 09:46:39.987765 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:02:19.4364617 +0000 UTC Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.030896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.030961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.030982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.031011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.031030 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.135746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.135841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.135868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.135903 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.135927 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.239283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.239859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.240048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.240212 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.240350 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.344054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.344133 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.344160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.344222 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.344248 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.447795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.447889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.447920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.447956 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.447982 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.551298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.551371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.551389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.551414 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.551431 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.654869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.654934 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.654953 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.654980 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.654999 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.758634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.758702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.758729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.758765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.758791 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.862138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.862223 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.862246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.862278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.862302 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.966291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.966403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.966421 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.966460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.966483 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:40Z","lastTransitionTime":"2026-02-16T09:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.988769 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:05:24.591164158 +0000 UTC Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.993271 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.993339 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.993359 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:40 crc kubenswrapper[4814]: E0216 09:46:40.993503 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:40 crc kubenswrapper[4814]: I0216 09:46:40.993645 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:40 crc kubenswrapper[4814]: E0216 09:46:40.993722 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:40 crc kubenswrapper[4814]: E0216 09:46:40.993994 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:40 crc kubenswrapper[4814]: E0216 09:46:40.994163 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.070395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.070479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.070504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.070572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.070599 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.174841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.174926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.174944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.174976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.174996 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.278463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.278522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.278566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.278597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.278613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.382233 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.382313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.382331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.382357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.382379 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.485801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.485958 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.485988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.486032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.486057 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.589893 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.589956 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.589975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.590002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.590023 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.693015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.693462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.693756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.693952 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.694167 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.798808 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.798888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.798911 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.798942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.798962 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.903307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.903362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.903377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.903400 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.903416 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:41Z","lastTransitionTime":"2026-02-16T09:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:41 crc kubenswrapper[4814]: I0216 09:46:41.989514 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:46:15.628858739 +0000 UTC Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.006771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.006840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.006861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.006887 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.006903 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.110419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.110500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.110526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.110600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.110625 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.214622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.214698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.214716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.214743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.214762 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.318322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.318386 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.318404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.318430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.318448 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.422573 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.422645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.422664 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.422694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.422712 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.525696 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.525771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.525790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.525819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.525840 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.629659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.629757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.629788 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.629825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.629850 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.732932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.732988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.733008 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.733033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.733052 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.836290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.836355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.836375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.836402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.836422 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.940303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.940392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.940415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.940447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.940470 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:42Z","lastTransitionTime":"2026-02-16T09:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.989975 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:11:50.268036418 +0000 UTC Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.993454 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:42 crc kubenswrapper[4814]: E0216 09:46:42.993867 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.993921 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.993940 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:42 crc kubenswrapper[4814]: E0216 09:46:42.994093 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:42 crc kubenswrapper[4814]: I0216 09:46:42.993934 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:42 crc kubenswrapper[4814]: E0216 09:46:42.994191 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:42 crc kubenswrapper[4814]: E0216 09:46:42.994261 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.021665 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.042435 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.043425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.043456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.043468 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.043490 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.043502 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.063513 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.079828 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.101348 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.117376 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.141890 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://17efb66f107b86b22aea68afea7291b404d79a17a9176232936cd64db819724a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:17Z\\\",\\\"message\\\":\\\" 6264 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.520499 6264 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 09:46:17.520672 6264 
reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0216 09:46:17.521066 6264 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 09:46:17.521110 6264 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:17.521137 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:17.521180 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:17.521664 6264 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:17.521688 6264 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:17.521712 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 09:46:17.521716 6264 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:17.521727 6264 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:17.521734 6264 factory.go:656] Stopping watch factory\\\\nI0216 09:46:17.521750 6264 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 
handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.149329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.149400 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.149420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.149448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.149470 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.158147 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.175419 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.189247 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.204968 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.220099 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.236423 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.249684 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.252317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.252341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.252350 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.252366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.252376 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.270176 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.291807 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.310132 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:43Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.354845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.354881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.354894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.354911 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.354921 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.457754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.457792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.457801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.457817 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.457827 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.561392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.561501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.561527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.561628 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.561653 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.664585 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.664654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.664680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.664709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.664733 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.768182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.768577 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.768594 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.768617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.768634 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.872030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.872084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.872108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.872140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.872162 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.975041 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.975112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.975134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.975167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.975189 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:43Z","lastTransitionTime":"2026-02-16T09:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:43 crc kubenswrapper[4814]: I0216 09:46:43.990782 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:46:08.413697191 +0000 UTC Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.078155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.078306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.078330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.078361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.078383 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.181185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.181232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.181246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.181266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.181279 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.284334 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.284402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.284441 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.284476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.284496 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.388529 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.388640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.388658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.388686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.388705 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.491396 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.491464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.491483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.491509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.491529 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.595264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.595339 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.595362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.595398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.595427 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.699931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.700007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.700026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.700056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.700075 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.803610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.803661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.803678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.803702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.803718 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.907106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.907268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.907295 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.907333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.907355 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:44Z","lastTransitionTime":"2026-02-16T09:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.991803 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:18:18.368760424 +0000 UTC Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.993126 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:44 crc kubenswrapper[4814]: E0216 09:46:44.993296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.993683 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:44 crc kubenswrapper[4814]: E0216 09:46:44.993777 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.993887 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:44 crc kubenswrapper[4814]: I0216 09:46:44.993910 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:44 crc kubenswrapper[4814]: E0216 09:46:44.993953 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:44 crc kubenswrapper[4814]: E0216 09:46:44.994112 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.010640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.010698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.010713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.010736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.010752 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.114345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.114421 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.114448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.114478 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.114502 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.217102 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.217159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.217176 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.217201 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.217217 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.321768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.321843 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.321857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.321901 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.321913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.426477 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.426578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.426598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.426623 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.426677 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.530063 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.530106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.530117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.530134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.530146 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.632163 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.632198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.632208 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.632224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.632233 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.734780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.734856 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.734880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.734917 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.734939 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.838716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.838782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.838799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.838830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.838848 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.942747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.942816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.942838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.942871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.942899 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:45Z","lastTransitionTime":"2026-02-16T09:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:45 crc kubenswrapper[4814]: I0216 09:46:45.992410 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:30:54.443376565 +0000 UTC Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.046049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.046099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.046115 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.046138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.046157 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.149936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.150011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.150035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.150065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.150087 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.253304 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.253374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.253393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.253410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.253422 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.357343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.357422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.357457 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.357493 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.357516 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.460801 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.460875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.460889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.460938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.460961 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.564430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.564483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.564492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.564510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.564523 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.667838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.667903 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.667922 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.667951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.667970 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.770317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.770379 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.770395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.770415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.770426 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.878119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.878174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.878187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.878205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.878217 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.980876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.980937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.980947 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.980965 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.980977 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:46Z","lastTransitionTime":"2026-02-16T09:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.993701 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.993680 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:23:19.422995865 +0000 UTC Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.993700 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.993854 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:46 crc kubenswrapper[4814]: E0216 09:46:46.993953 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:46 crc kubenswrapper[4814]: E0216 09:46:46.994067 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:46 crc kubenswrapper[4814]: E0216 09:46:46.994148 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.994285 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:46 crc kubenswrapper[4814]: E0216 09:46:46.994604 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:46 crc kubenswrapper[4814]: I0216 09:46:46.994726 4814 scope.go:117] "RemoveContainer" containerID="dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc" Feb 16 09:46:46 crc kubenswrapper[4814]: E0216 09:46:46.994968 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.025761 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.045312 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.074296 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.084030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.084099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.084114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.084136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.084151 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.089430 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc 
kubenswrapper[4814]: I0216 09:46:47.103037 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j
dr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.115191 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.131787 4814 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.147417 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.166109 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.187505 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.187559 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.187571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.187588 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.187600 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.189437 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.206488 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.222713 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.235447 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.253654 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411f
dcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.271376 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519
449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.291187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.291237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.291250 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.291268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.291280 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.295991 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.313652 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:47Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.393749 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.393823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.393843 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.393866 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.393881 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.496464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.496582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.496599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.496625 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.496641 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.599389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.599447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.599463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.599489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.599507 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.703293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.703362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.703387 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.703420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.703442 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.806269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.806333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.806351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.806378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.806394 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.908562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.908607 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.908618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.908636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.908648 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:47Z","lastTransitionTime":"2026-02-16T09:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:47 crc kubenswrapper[4814]: I0216 09:46:47.994814 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:35:34.300493102 +0000 UTC Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.011769 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.011840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.011862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.011894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.011920 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.116080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.116187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.116214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.116298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.116326 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.219282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.219329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.219338 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.219359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.219369 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.322465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.322570 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.322605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.322635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.322658 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.426419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.426485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.426511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.426610 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.426641 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.530270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.530332 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.530345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.530366 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.530380 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.633415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.633471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.633480 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.633499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.633512 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.736134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.736194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.736208 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.736229 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.736245 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.838860 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.838926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.838944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.838972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.838991 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.941590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.941640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.941653 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.941670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.941681 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:48Z","lastTransitionTime":"2026-02-16T09:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.993272 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.993364 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.993367 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.993426 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:48 crc kubenswrapper[4814]: E0216 09:46:48.993770 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:48 crc kubenswrapper[4814]: E0216 09:46:48.993841 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:48 crc kubenswrapper[4814]: E0216 09:46:48.993929 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:48 crc kubenswrapper[4814]: E0216 09:46:48.994041 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:48 crc kubenswrapper[4814]: I0216 09:46:48.995296 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:17:26.414496887 +0000 UTC Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.044763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.044832 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.044850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.044876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.044891 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.147600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.147651 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.147667 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.147689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.147703 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.251236 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.251299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.251316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.251344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.251362 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.354009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.354071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.354086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.354108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.354120 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.458771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.458869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.458895 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.458933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.458955 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.561388 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.561455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.561471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.561492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.561507 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.664584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.664633 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.664645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.664664 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.664677 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.767618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.767694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.767707 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.767755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.767771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.834993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.835089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.835113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.835147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.835171 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.857940 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.863853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.863920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.863938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.863968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.863988 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.891120 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.896940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.896995 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.897012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.897038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.897057 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.912686 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.917255 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.917562 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.917719 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.917853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.917967 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.937018 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.942071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.942283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.942393 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.942495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.942607 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.963781 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:49Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:49 crc kubenswrapper[4814]: E0216 09:46:49.964446 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.966660 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.966787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.966878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.966992 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.967078 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:49Z","lastTransitionTime":"2026-02-16T09:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:49 crc kubenswrapper[4814]: I0216 09:46:49.995427 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:16:14.693685241 +0000 UTC Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.070035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.070398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.070565 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.076073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.076194 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.179160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.179234 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.179247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.179264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.179274 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.281649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.281695 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.281710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.281726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.281738 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.384052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.384129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.384151 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.384179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.384198 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.486436 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.486509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.486527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.486603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.486621 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.590133 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.590382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.590452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.590817 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.590904 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.693679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.693984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.694049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.694117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.694236 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.797518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.797604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.797624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.797656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.797681 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.900091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.900129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.900139 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.900156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.900167 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:50Z","lastTransitionTime":"2026-02-16T09:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.993366 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.993419 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.993367 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:50 crc kubenswrapper[4814]: E0216 09:46:50.993499 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:50 crc kubenswrapper[4814]: E0216 09:46:50.993629 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.993647 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:50 crc kubenswrapper[4814]: E0216 09:46:50.993761 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:50 crc kubenswrapper[4814]: E0216 09:46:50.993863 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:50 crc kubenswrapper[4814]: I0216 09:46:50.996348 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:30:49.988129043 +0000 UTC Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.002557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.002758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.002918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.003075 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.003204 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.046528 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:51 crc kubenswrapper[4814]: E0216 09:46:51.046772 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:51 crc kubenswrapper[4814]: E0216 09:46:51.047224 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:47:23.047194466 +0000 UTC m=+100.740350676 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.106114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.106149 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.106160 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.106178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.106190 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.209410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.209935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.210170 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.210389 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.210637 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.314118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.314440 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.314659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.314819 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.315363 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.419302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.419689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.419909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.420124 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.420314 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.523274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.523339 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.523362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.523391 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.523414 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.625926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.625993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.626006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.626026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.626038 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.729307 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.729371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.729395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.729428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.729450 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.833659 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.833709 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.833726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.833747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.833761 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.936811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.936906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.936929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.936956 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.936975 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:51Z","lastTransitionTime":"2026-02-16T09:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:51 crc kubenswrapper[4814]: I0216 09:46:51.997417 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:09:44.304483444 +0000 UTC Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.039578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.039648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.039667 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.039694 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.039724 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.143169 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.143251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.143273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.143300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.143321 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.245354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.245412 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.245424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.245448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.245458 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.348601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.348670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.348682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.348700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.348713 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.451844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.451924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.451945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.451975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.451995 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.554636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.554838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.554852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.554872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.554883 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.658347 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.658398 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.658407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.658425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.658436 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.761430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.761487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.761499 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.761521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.761563 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.863996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.864065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.864076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.864096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.864107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.967183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.967229 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.967242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.967259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.967272 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:52Z","lastTransitionTime":"2026-02-16T09:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.994052 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.993962 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.993357 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.994298 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:52 crc kubenswrapper[4814]: E0216 09:46:52.994383 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:52 crc kubenswrapper[4814]: E0216 09:46:52.994618 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:52 crc kubenswrapper[4814]: E0216 09:46:52.994778 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:52 crc kubenswrapper[4814]: E0216 09:46:52.995028 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:52 crc kubenswrapper[4814]: I0216 09:46:52.997659 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:57:46.144585273 +0000 UTC Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.012767 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.024583 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.042571 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.054012 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.068840 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.070220 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.070349 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.070426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 
09:46:53.070547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.070658 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.081240 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.095325 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.108840 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.119088 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.130971 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.144788 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.156684 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.165043 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.174502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.174564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.174578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.174597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.174611 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.176049 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z 
is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.189458 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.212151 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.225319 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:46:53Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.276511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.276557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.276568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.276587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.276598 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.378406 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.378445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.378455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.378472 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.378483 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.482196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.482768 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.482941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.483073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.483197 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.586462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.586872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.587026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.587162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.587283 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.690959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.691015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.691033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.691131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.691211 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.794235 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.794279 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.794297 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.794318 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.794333 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.903984 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.904047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.904059 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.904079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.904119 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:53Z","lastTransitionTime":"2026-02-16T09:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:53 crc kubenswrapper[4814]: I0216 09:46:53.998637 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:59:29.311702776 +0000 UTC Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.007305 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.007359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.007380 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.007407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.007426 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.110669 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.110727 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.110740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.110797 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.110815 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.213997 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.214048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.214060 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.214080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.214095 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.318425 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.318486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.318498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.318522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.318553 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.421975 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.422032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.422048 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.422072 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.422088 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.499283 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/0.log" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.499673 4814 generic.go:334] "Generic (PLEG): container finished" podID="419c1fde-3a56-45c4-b6aa-5c5b8cde8db6" containerID="e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919" exitCode=1 Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.499791 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerDied","Data":"e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.500503 4814 scope.go:117] "RemoveContainer" containerID="e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.541784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.542278 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.542290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.542309 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.542322 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.543765 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.558798 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.582863 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.597382 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.612382 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.628935 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.645776 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.645824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.645836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.645858 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.645872 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.646569 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.661269 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.673259 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.688085 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.702768 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.717834 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.728786 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.742249 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753224 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753269 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.753295 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.770501 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.783215 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:46:54Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.856871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.856897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.856906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.856922 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.856932 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.959178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.959207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.959217 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.959233 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.959242 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:54Z","lastTransitionTime":"2026-02-16T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.994817 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:54 crc kubenswrapper[4814]: E0216 09:46:54.994912 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.994963 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:54 crc kubenswrapper[4814]: E0216 09:46:54.995001 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.995035 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:54 crc kubenswrapper[4814]: E0216 09:46:54.995072 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.995106 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:46:54 crc kubenswrapper[4814]: E0216 09:46:54.995141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:46:54 crc kubenswrapper[4814]: I0216 09:46:54.999340 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:14:12.396926419 +0000 UTC Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.061136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.061166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.061174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.061189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.061197 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.164650 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.164682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.164692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.164710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.164723 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.267713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.267766 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.267784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.267806 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.267845 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.372521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.372564 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.372572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.372587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.372597 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.475588 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.475636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.475645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.475663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.475680 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.506432 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/0.log" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.506513 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerStarted","Data":"cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.523908 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.537743 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.554160 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.566120 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.577489 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.578034 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.578081 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.578094 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.578112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.578124 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.594492 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.609397 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.625110 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.635342 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.649451 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f3765
57e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.663915 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.680545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.680587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.680601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.680621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.680632 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.684745 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.701400 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.715316 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.731149 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.754072 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.768968 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:46:55Z is after 2025-08-24T17:21:41Z" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.782896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.782932 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.782943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.782960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.782971 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.885689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.885775 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.885793 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.885826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.885846 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.989899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.989942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.989954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.989974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:55 crc kubenswrapper[4814]: I0216 09:46:55.989987 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:55Z","lastTransitionTime":"2026-02-16T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.000444 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:01:23.2265132 +0000 UTC Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.099632 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.099707 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.099722 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.099742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.099757 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.202931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.203011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.203031 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.203066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.203086 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.306404 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.306492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.306513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.306591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.306630 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.408904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.408967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.408985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.409011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.409029 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.510498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.510645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.510666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.510692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.510709 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.613086 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.613126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.613138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.613159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.613177 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.722462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.722519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.722545 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.722563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.722577 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.825138 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.825182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.825196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.825211 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.825222 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.928509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.928605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.928624 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.928654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.928673 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:56Z","lastTransitionTime":"2026-02-16T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.993700 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.993700 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.994447 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:46:56 crc kubenswrapper[4814]: I0216 09:46:56.994790 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:46:56 crc kubenswrapper[4814]: E0216 09:46:56.994776 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:46:56 crc kubenswrapper[4814]: E0216 09:46:56.995121 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:46:56 crc kubenswrapper[4814]: E0216 09:46:56.995075 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:46:56 crc kubenswrapper[4814]: E0216 09:46:56.995270 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.000585 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:15:02.88583633 +0000 UTC
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.031050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.031126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.031149 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.031178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.031197 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.134790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.134875 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.134894 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.134924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.134943 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.238023 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.238080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.238093 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.238117 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.238133 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.340847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.341225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.341312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.341396 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.341494 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.445183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.445270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.445286 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.445306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.445320 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.547877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.547944 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.547954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.547977 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.547991 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.650718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.650818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.650857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.650891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.650923 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.753712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.753785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.753802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.753826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.753840 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.857897 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.857998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.858022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.858051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.858070 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.961413 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.961487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.961506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.961566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:57 crc kubenswrapper[4814]: I0216 09:46:57.961586 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:57Z","lastTransitionTime":"2026-02-16T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.000783 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:32:46.731834617 +0000 UTC
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.008647 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.065635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.065713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.065732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.065765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.065790 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.168681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.168738 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.168754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.168775 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.168793 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.272666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.272735 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.272758 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.272786 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.272805 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.376293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.376377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.376405 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.376432 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.376450 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.478871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.478950 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.478978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.479005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.479024 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.581193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.581260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.581279 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.581306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.581324 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.685940 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.686007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.686025 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.686051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.686069 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.789055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.789106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.789116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.789135 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.789147 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.892579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.892677 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.892726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.892754 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.892772 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.993009 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.993060 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.993111 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.993116 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:46:58 crc kubenswrapper[4814]: E0216 09:46:58.993988 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:46:58 crc kubenswrapper[4814]: E0216 09:46:58.994055 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:46:58 crc kubenswrapper[4814]: E0216 09:46:58.994124 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:46:58 crc kubenswrapper[4814]: E0216 09:46:58.994347 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.995581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.995639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.995652 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.995668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:58 crc kubenswrapper[4814]: I0216 09:46:58.995679 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:58Z","lastTransitionTime":"2026-02-16T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.001390 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:10:45.270850512 +0000 UTC
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.102183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.102301 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.102319 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.102342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.102388 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.205336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.205382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.205392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.205407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.205417 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.308028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.308107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.308126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.308155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.308174 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.411402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.411469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.411479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.411498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.411510 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.515581 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.515666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.515703 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.515735 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.515781 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.619627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.619725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.619751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.619785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.619810 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.722506 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.722587 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.722599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.722617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.722631 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.826004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.826096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.826118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.826145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.826166 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.928942 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.928996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.929009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.929030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:46:59 crc kubenswrapper[4814]: I0216 09:46:59.929047 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:46:59Z","lastTransitionTime":"2026-02-16T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.002496 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 09:01:50.255163812 +0000 UTC Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.032558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.032618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.032635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.032661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.032680 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.095251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.095300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.095312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.095330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.095342 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.115013 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.121586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.121645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.121657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.121679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.121692 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.141686 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.147614 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.147698 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.147717 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.147751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.147771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.170523 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.177193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.177262 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.177292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.177326 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.177350 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.197970 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.203477 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.203593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.203619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.203649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.203673 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.222621 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:00Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.222773 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.225155 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.225189 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.225216 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.225242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.225259 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.328903 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.329174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.329198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.329231 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.329256 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.432816 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.432874 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.432887 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.432910 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.432928 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.535780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.535832 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.535842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.535861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.535874 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.639508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.640084 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.640325 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.640760 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.641104 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.744370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.744870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.745066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.745266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.745745 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.849262 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.849315 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.849330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.849352 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.849367 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.952687 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.952765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.952778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.952796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.952809 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:00Z","lastTransitionTime":"2026-02-16T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.993376 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.993423 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.993433 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.994052 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.994323 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.994151 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:00 crc kubenswrapper[4814]: I0216 09:47:00.994698 4814 scope.go:117] "RemoveContainer" containerID="dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc" Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.995006 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:00 crc kubenswrapper[4814]: E0216 09:47:00.995114 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.003313 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:46:58.926322542 +0000 UTC Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.055748 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.056238 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.056939 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.056994 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.057019 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.159908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.159959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.159979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.160004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.160022 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.262429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.262522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.262563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.262590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.262605 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.274881 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.365697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.365777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.365820 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.365858 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.365886 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.469927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.469976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.469985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.470002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.470013 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.533941 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/2.log" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.538953 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.541209 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.566307 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f376
557e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.578308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.578351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.578367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.578415 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.578431 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.581810 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.612308 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.626679 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.654303 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.669151 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.681501 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.681547 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.681556 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.681571 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.681582 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.693225 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 
3 for removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 
09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.714686 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc 
kubenswrapper[4814]: I0216 09:47:01.732908 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.747829 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.764728 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.778902 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.784407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.784454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.784464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.784482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.784497 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.792497 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.806525 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.824720 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 
09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.841778 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.857514 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.868796 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:01Z is after 2025-08-24T17:21:41Z"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.886652 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.886679 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.886692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.886710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.886722 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.989242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.989300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.989313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.989344 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:01 crc kubenswrapper[4814]: I0216 09:47:01.989358 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:01Z","lastTransitionTime":"2026-02-16T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.003822 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:38:09.441972483 +0000 UTC
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.092502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.092561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.092573 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.092593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.092607 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.195909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.195967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.195981 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.196003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.196017 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.299108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.299175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.299192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.299219 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.299237 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.402715 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.402785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.402805 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.402836 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.402854 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.506869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.506923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.506941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.506963 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.506976 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.545184 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/3.log"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.546267 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/2.log"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.550526 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" exitCode=1
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.550604 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2"}
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.550694 4814 scope.go:117] "RemoveContainer" containerID="dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.551714 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2"
Feb 16 09:47:02 crc kubenswrapper[4814]: E0216 09:47:02.551962 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed"
Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.575424 4814 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49
bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.596180 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-r
bac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.609757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.609817 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.609844 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.609871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.609890 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.622012 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.637208 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b6903291
55679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.661431 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:47:02Z\\\",\\\"message\\\":\\\"obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-ghlbk after 0 failed attempt(s)\\\\nI0216 09:47:02.083930 6873 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-ghlbk\\\\nI0216 09:47:02.083833 6873 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 09:47:02.084081 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:47:02.084139 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 09:47:02.084179 6873 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 09:47:02.084255 6873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"
name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 
09:47:02.677828 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc 
kubenswrapper[4814]: I0216 09:47:02.694267 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.708635 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.712578 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.712622 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.712631 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.712649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.712662 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.723253 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.741626 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.759210 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.771151 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.785991 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.803009 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.814866 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.814918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.814928 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.814949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.814961 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.820626 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.833943 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.845668 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.860714 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:02Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.917073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.917130 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.917145 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.917168 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.917182 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:02Z","lastTransitionTime":"2026-02-16T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.993477 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:02 crc kubenswrapper[4814]: E0216 09:47:02.993644 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.993850 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:02 crc kubenswrapper[4814]: E0216 09:47:02.993916 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.994064 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:02 crc kubenswrapper[4814]: E0216 09:47:02.994123 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:02 crc kubenswrapper[4814]: I0216 09:47:02.994227 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:02 crc kubenswrapper[4814]: E0216 09:47:02.994279 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.004923 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:39:20.15696353 +0000 UTC Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.009691 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc 
kubenswrapper[4814]: I0216 09:47:03.020469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.020508 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.020519 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.020563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.020578 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.024124 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.039056 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.070998 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc6f52ee0117346a26265e5b8bb3d88fe8c8c82c0dcc0e93beff1904e9495fbc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"message\\\":\\\"2.173493 6489 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 09:46:32.173555 6489 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 09:46:32.173600 6489 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI0216 09:46:32.173612 6489 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 09:46:32.173621 6489 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 09:46:32.173631 6489 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 09:46:32.173655 6489 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 09:46:32.173659 6489 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 09:46:32.173626 6489 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 09:46:32.173693 6489 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 09:46:32.173708 6489 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 09:46:32.173735 6489 factory.go:656] Stopping watch factory\\\\nI0216 09:46:32.173755 6489 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:46:32.173775 6489 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 09:46:32.173780 6489 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 09\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:47:02Z\\\",\\\"message\\\":\\\"obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-ghlbk after 0 failed attempt(s)\\\\nI0216 09:47:02.083930 6873 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-ghlbk\\\\nI0216 09:47:02.083833 6873 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 09:47:02.084081 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:47:02.084139 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 09:47:02.084179 6873 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 09:47:02.084255 6873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"
name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 
09:47:03.090276 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.118259 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z"
Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.124803 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.124912 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.124976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.125006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.125067 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.135689 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.156152 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.172586 4814 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6
b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-
host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.189754 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.203452 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.216359 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.228131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.228175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.228190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.228214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.228231 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.235061 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.251277 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.270381 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66
443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.291280 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.305521 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b6903291
55679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.319909 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.330889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.330915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.330926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.330941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.330951 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.433071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.433114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.433127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.433148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.433158 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.535616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.535663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.535674 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.535693 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.535707 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.556677 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/3.log" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.561928 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" Feb 16 09:47:03 crc kubenswrapper[4814]: E0216 09:47:03.562376 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.574904 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.591337 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.605286 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.619679 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.630779 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.638047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.638108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.638125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.638148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.638161 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.643078 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.655262 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b6903291
55679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.669375 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.682371 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.703658 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.722566 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.740591 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.740654 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.740671 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.740697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.740714 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.754222 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:47:02Z\\\",\\\"message\\\":\\\"obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-ghlbk after 0 failed attempt(s)\\\\nI0216 09:47:02.083930 6873 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-ghlbk\\\\nI0216 09:47:02.083833 6873 model_client.go:398] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 09:47:02.084081 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:47:02.084139 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 09:47:02.084179 6873 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 09:47:02.084255 6873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.771053 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.783717 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.796104 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.809101 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.820230 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.830741 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:03Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.844069 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.844118 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.844133 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.844152 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.844163 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.946603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.946641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.946649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.946666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:03 crc kubenswrapper[4814]: I0216 09:47:03.946677 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:03Z","lastTransitionTime":"2026-02-16T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.005224 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:25:20.717239763 +0000 UTC Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.050876 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.050919 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.050929 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.050945 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.050955 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.154260 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.154313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.154327 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.154348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.154362 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.258699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.258746 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.258755 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.258773 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.258783 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.362460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.362512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.362522 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.362560 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.362575 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.466095 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.466177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.466202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.466232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.466251 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.569206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.569274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.569489 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.569569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.569596 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.672818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.673166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.673323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.673526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.673730 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.777108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.777705 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.777966 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.778275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.778625 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.882438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.882516 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.882570 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.882601 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.882625 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.986067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.986140 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.986172 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.986205 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.986224 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:04Z","lastTransitionTime":"2026-02-16T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.993162 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.993403 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:04 crc kubenswrapper[4814]: E0216 09:47:04.993696 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.993319 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:04 crc kubenswrapper[4814]: I0216 09:47:04.993295 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:04 crc kubenswrapper[4814]: E0216 09:47:04.993898 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:04 crc kubenswrapper[4814]: E0216 09:47:04.994113 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:04 crc kubenswrapper[4814]: E0216 09:47:04.995191 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.005718 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:56:04.844833353 +0000 UTC Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.089936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.090029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.090091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.090125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.090187 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.192946 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.193003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.193022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.193049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.193067 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.296494 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.296614 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.296639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.296675 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.296699 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.400646 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.400697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.400711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.400736 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.400750 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.504904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.505310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.505466 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.505640 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.505765 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.608595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.608905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.608974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.609042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.609129 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.712759 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.713356 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.713371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.713394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.713411 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.816915 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.817014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.817036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.817558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.817619 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.921566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.921648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.921668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.921697 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:05 crc kubenswrapper[4814]: I0216 09:47:05.921717 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:05Z","lastTransitionTime":"2026-02-16T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.005858 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:14:52.53561412 +0000 UTC Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.024867 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.024931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.024948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.024974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.024992 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.128154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.128214 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.128233 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.128259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.128278 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.231740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.231810 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.231826 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.231853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.231873 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.334035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.334089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.334107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.334133 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.334151 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.436695 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.436752 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.436765 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.436787 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.436804 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.539943 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.540007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.540024 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.540051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.540072 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.644753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.645255 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.645456 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.645762 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.645973 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.745976 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.746137 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 09:48:10.746110134 +0000 UTC m=+148.439266314 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.746188 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.746257 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.746333 4814 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.746378 4814 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.746397 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:10.746383471 +0000 UTC m=+148.439539661 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.746413 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:10.746404961 +0000 UTC m=+148.439561141 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.748729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.748764 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.748777 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.748796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.748812 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.847308 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.847410 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.847675 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.847709 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.847731 4814 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.847810 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:10.847784689 +0000 UTC m=+148.540940919 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.848116 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.848154 4814 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.848170 4814 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.848237 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:10.84822118 +0000 UTC m=+148.541377400 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.852050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.852094 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.852106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.852129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.852143 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.954716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.954778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.954795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.954818 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.954834 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:06Z","lastTransitionTime":"2026-02-16T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.992769 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.992873 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.992991 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.993189 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:06 crc kubenswrapper[4814]: I0216 09:47:06.993225 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.993421 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.993409 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:06 crc kubenswrapper[4814]: E0216 09:47:06.993581 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.006563 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:32:13.457127571 +0000 UTC Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.058012 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.058085 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.058109 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.058146 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.058173 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.161186 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.161243 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.161262 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.161283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.161300 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.264735 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.264783 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.264794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.264811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.264823 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.367323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.367407 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.367421 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.367439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.367453 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.470976 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.471047 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.471067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.471098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.471118 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.574853 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.574938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.574962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.574993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.575016 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.678340 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.678411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.678428 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.678452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.678467 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.790703 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.790779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.790798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.790827 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.790850 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.894198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.894267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.894280 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.894299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.894313 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.997246 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.997308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.997322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.997341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:07 crc kubenswrapper[4814]: I0216 09:47:07.997354 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:07Z","lastTransitionTime":"2026-02-16T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.007497 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:03:30.778798605 +0000 UTC Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.100070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.100156 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.100172 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.100196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.100211 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.203792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.203835 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.203845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.203860 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.203871 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.307371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.307410 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.307419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.307435 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.307445 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.411056 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.411126 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.411147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.411177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.411198 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.515164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.515230 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.515241 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.515262 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.515276 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.618302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.618351 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.618361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.618382 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.618391 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.721119 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.721187 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.721207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.721232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.721249 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.824504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.824614 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.824633 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.824713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.824737 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.927728 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.927792 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.927809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.927838 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.927856 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:08Z","lastTransitionTime":"2026-02-16T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.993478 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.993577 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:08 crc kubenswrapper[4814]: E0216 09:47:08.993786 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:08 crc kubenswrapper[4814]: E0216 09:47:08.993968 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.994227 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:08 crc kubenswrapper[4814]: I0216 09:47:08.994342 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:08 crc kubenswrapper[4814]: E0216 09:47:08.994692 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:08 crc kubenswrapper[4814]: E0216 09:47:08.995218 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.007766 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:38:42.379701917 +0000 UTC Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.031135 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.031462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.031680 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.031729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.031771 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.135618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.135671 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.135688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.135713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.135729 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.238982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.239513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.239600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.239635 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.239723 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.344763 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.344833 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.344849 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.344868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.344882 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.448282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.448341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.448359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.448387 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.448415 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.552316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.552372 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.552394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.552426 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.552447 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.656584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.656650 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.656663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.656684 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.656694 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.759597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.759668 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.759689 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.759716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.759737 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.863016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.863091 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.863110 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.863143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.863163 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.967711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.967771 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.967782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.967803 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:09 crc kubenswrapper[4814]: I0216 09:47:09.967817 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:09Z","lastTransitionTime":"2026-02-16T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.008181 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:47:08.186790879 +0000 UTC Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.070316 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.070385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.070402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.070423 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.070439 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.173594 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.173637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.173646 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.173661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.173671 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.277237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.277320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.277340 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.277368 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.277389 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.381159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.381237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.381252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.381275 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.381327 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.484419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.484482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.484500 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.484526 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.484582 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.572302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.572378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.572402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.572439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.572466 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.592637 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.598419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.598483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.598502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.598557 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.598580 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.621043 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.627322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.627438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.627465 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.627502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.627527 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.647044 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.653226 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.653312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.653341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.653376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.653402 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.676986 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.683651 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.683710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.683725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.683747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.683762 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.709832 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:10Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.710123 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.713179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.713287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.713313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.713355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.713382 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.817179 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.817273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.817298 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.817327 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.817347 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.920785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.920893 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.920908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.920954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.920979 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:10Z","lastTransitionTime":"2026-02-16T09:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.992763 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.992844 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.992951 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:10 crc kubenswrapper[4814]: I0216 09:47:10.992952 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.993106 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.993331 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.993606 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:10 crc kubenswrapper[4814]: E0216 09:47:10.993837 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.009091 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:52:32.679648419 +0000 UTC Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.024143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.024215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.024248 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.024291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.024315 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.127909 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.127968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.127982 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.128004 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.128017 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.231116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.231167 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.231182 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.231203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.231222 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.334960 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.335043 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.335069 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.335103 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.335127 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.438859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.438949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.438968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.438996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.439015 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.542162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.542237 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.542261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.542296 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.542320 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.646201 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.646293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.646323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.646357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.646387 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.750391 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.750463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.750488 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.750521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.750584 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.854337 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.854401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.854419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.854446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.854464 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.957226 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.957289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.957308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.957336 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:11 crc kubenswrapper[4814]: I0216 09:47:11.957354 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:11Z","lastTransitionTime":"2026-02-16T09:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.009644 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:38:38.633773649 +0000 UTC Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.061202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.061291 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.061312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.061342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.061364 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.163974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.164029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.164040 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.164062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.164080 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.267263 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.267331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.267356 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.267386 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.267407 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.372968 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.373053 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.373065 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.373083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.373097 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.477184 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.477256 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.477276 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.477313 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.477336 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.581173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.581232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.581242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.581259 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.581272 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.686841 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.686892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.686904 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.686925 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.686939 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.789906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.789972 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.789996 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.790018 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.790038 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.894161 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.894267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.894286 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.894349 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.894369 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.993248 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:12 crc kubenswrapper[4814]: E0216 09:47:12.993477 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.994092 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.994189 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:12 crc kubenswrapper[4814]: E0216 09:47:12.994273 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.994421 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:12 crc kubenswrapper[4814]: E0216 09:47:12.994598 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:12 crc kubenswrapper[4814]: E0216 09:47:12.994859 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.997731 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.997766 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.997780 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.997802 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:12 crc kubenswrapper[4814]: I0216 09:47:12.997815 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:12Z","lastTransitionTime":"2026-02-16T09:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.009885 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:11:21.43476573 +0000 UTC Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.018760 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.039335 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.061439 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.079055 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.096052 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.102292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.102319 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.102346 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.102363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.102373 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.116026 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.133489 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.152668 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.176312 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.188274 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.203023 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f3765
57e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.205159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.205202 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.205218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.205241 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.205399 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.221239 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.245342 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d9
0a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.263825 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.280553 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.297668 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.309173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.309225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.309235 4814 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.309254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.309267 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.319488 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:47:02Z\\\",\\\"message\\\":\\\"obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-ghlbk after 0 failed attempt(s)\\\\nI0216 09:47:02.083930 6873 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-ghlbk\\\\nI0216 09:47:02.083833 6873 model_client.go:398] Mutate operations 
generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 09:47:02.084081 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:47:02.084139 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 09:47:02.084179 6873 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 09:47:02.084255 6873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.334376 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:13Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.412370 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.412432 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.412446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.412469 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.412486 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.516114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.516684 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.516795 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.516884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.516945 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.620251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.620311 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.620323 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.620342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.620355 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.723462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.724002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.724254 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.724416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.724583 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.828504 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.828794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.828850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.828885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.828905 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.932600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.932676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.932699 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.932732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:13 crc kubenswrapper[4814]: I0216 09:47:13.932753 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:13Z","lastTransitionTime":"2026-02-16T09:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.010868 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:37:25.536042972 +0000 UTC Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.036568 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.036621 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.036638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.036661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.036677 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.140123 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.140215 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.140241 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.140277 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.140302 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.243290 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.243354 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.243374 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.243401 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.243419 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.346629 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.346688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.346701 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.346720 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.346735 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.449451 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.449503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.449515 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.449569 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.449588 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.552317 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.552364 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.552376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.552395 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.552408 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.655906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.655966 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.655979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.656000 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.656016 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.758955 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.759019 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.759036 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.759064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.759082 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.861828 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.861880 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.861898 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.861918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.861932 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.965757 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.965847 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.965872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.965906 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.965927 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:14Z","lastTransitionTime":"2026-02-16T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.992563 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.992661 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.992679 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:14 crc kubenswrapper[4814]: E0216 09:47:14.992712 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:14 crc kubenswrapper[4814]: I0216 09:47:14.992743 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:14 crc kubenswrapper[4814]: E0216 09:47:14.992791 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:14 crc kubenswrapper[4814]: E0216 09:47:14.992849 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:14 crc kubenswrapper[4814]: E0216 09:47:14.992896 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.012070 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:31:04.078607608 +0000 UTC Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.068392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.068427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.068438 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.068454 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.068467 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.171333 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.171394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.171417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.171445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.171468 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.274312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.274375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.274392 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.274420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.274443 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.377566 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.377840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.377907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.377978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.378043 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.481061 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.481142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.481166 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.481194 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.481218 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.585247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.585312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.585330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.585359 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.585378 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.692363 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.692419 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.692437 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.692464 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.692484 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.796067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.796129 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.796148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.796174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.796190 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.899711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.899779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.899796 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.899825 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.899847 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:15Z","lastTransitionTime":"2026-02-16T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:15 crc kubenswrapper[4814]: I0216 09:47:15.994029 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" Feb 16 09:47:15 crc kubenswrapper[4814]: E0216 09:47:15.994342 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.003249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.003299 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.003371 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.003427 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.003451 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.012851 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:34:26.362445008 +0000 UTC Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.105718 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.105782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.105799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.105824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.105844 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.209193 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.209265 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.209282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.209308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.209353 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.312198 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.312244 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.312253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.312272 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.312285 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.416192 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.416266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.416283 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.416312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.416334 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.519473 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.519527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.519573 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.519596 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.519610 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.623107 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.623180 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.623200 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.623228 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.623262 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.726269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.726329 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.726345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.726367 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.726379 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.830460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.830584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.830605 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.830633 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.830651 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.933180 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.933229 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.933241 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.933258 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.933268 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:16Z","lastTransitionTime":"2026-02-16T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.993414 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.993480 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.993503 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:16 crc kubenswrapper[4814]: I0216 09:47:16.993429 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:16 crc kubenswrapper[4814]: E0216 09:47:16.993698 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:16 crc kubenswrapper[4814]: E0216 09:47:16.993823 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:16 crc kubenswrapper[4814]: E0216 09:47:16.993976 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:16 crc kubenswrapper[4814]: E0216 09:47:16.994202 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.013705 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:53:11.275247743 +0000 UTC
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.035517 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.035658 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.035686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.035742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.035761 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.139046 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.139080 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.139089 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.139105 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.139119 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.241630 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.241672 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.241681 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.241696 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.241706 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.345990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.346079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.346104 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.346142 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.346179 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.449006 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.449066 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.449076 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.449092 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.449101 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.552666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.552722 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.552741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.552776 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.552800 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.656785 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.656848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.656862 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.656884 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.656902 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.760303 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.760385 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.760402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.760483 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.760521 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.864572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.864634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.864651 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.864676 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.864693 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.968322 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.968421 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.968440 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.968463 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:17 crc kubenswrapper[4814]: I0216 09:47:17.968478 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:17Z","lastTransitionTime":"2026-02-16T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.014470 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:44:31.576207514 +0000 UTC
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.071926 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.072001 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.072028 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.072060 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.072086 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.175320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.175400 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.175420 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.175446 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.175466 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.278779 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.278845 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.278857 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.278877 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.278892 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.381162 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.381242 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.381268 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.381300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.381327 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.485113 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.485174 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.485191 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.485218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.485236 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.589270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.589343 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.589375 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.589403 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.589421 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.693434 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.693476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.693509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.693527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.693557 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.797341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.797416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.797439 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.797512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.797571 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.900190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.900287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.900308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.900335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.900351 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:18Z","lastTransitionTime":"2026-02-16T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.993217 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.993276 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.993360 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:18 crc kubenswrapper[4814]: I0216 09:47:18.993240 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:18 crc kubenswrapper[4814]: E0216 09:47:18.993511 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:18 crc kubenswrapper[4814]: E0216 09:47:18.993850 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:18 crc kubenswrapper[4814]: E0216 09:47:18.994317 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:18 crc kubenswrapper[4814]: E0216 09:47:18.994221 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.002196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.002239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.002253 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.002269 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.002282 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.016590 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:48:28.9621455 +0000 UTC Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.105171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.105232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.105247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.105273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.105290 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.207733 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.207770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.207778 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.207793 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.207801 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.310804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.310881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.310899 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.310924 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.310941 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.414052 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.414112 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.414134 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.414164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.414186 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.516252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.516305 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.516362 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.516390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.516408 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.619416 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.619468 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.619484 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.619509 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.619525 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.722339 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.722422 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.722447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.722481 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.722504 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.826230 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.826277 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.826292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.826310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.826322 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.930120 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.930173 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.930185 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.930206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:19 crc kubenswrapper[4814]: I0216 09:47:19.930218 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:19Z","lastTransitionTime":"2026-02-16T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.017097 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:57:20.826743367 +0000 UTC Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.038619 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.038663 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.038882 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.038900 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.038913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.142603 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.142667 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.142686 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.142711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.142729 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.245872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.245948 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.245973 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.246003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.246028 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.349840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.349908 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.349927 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.349959 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.349983 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.454252 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.454325 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.454345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.454376 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.454392 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.557860 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.557935 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.557958 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.557986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.558004 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.661143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.661225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.661251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.661286 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.661313 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.765000 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.765100 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.765139 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.765175 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.765200 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.869005 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.869090 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.869116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.869148 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.869175 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.972345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.972411 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.972430 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.972453 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.972469 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.992508 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.992584 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.992472 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.992519 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:20 crc kubenswrapper[4814]: E0216 09:47:20.992705 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:20 crc kubenswrapper[4814]: E0216 09:47:20.992843 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:20 crc kubenswrapper[4814]: E0216 09:47:20.993079 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.993158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.993177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.993188 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.993203 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:20 crc kubenswrapper[4814]: I0216 09:47:20.993213 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:20Z","lastTransitionTime":"2026-02-16T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:20 crc kubenswrapper[4814]: E0216 09:47:20.993225 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.009890 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:21Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.015108 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.015178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.015190 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.015213 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.015242 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.017270 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:14:36.568158835 +0000 UTC Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.030140 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",
\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:21Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.035306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.035361 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.035377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.035402 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.035417 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.049192 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:21Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.054794 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.054938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.054961 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.054981 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.055017 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.072013 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:21Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.077641 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.077710 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.077725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.077750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.077768 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.093936 4814 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T09:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fefaad58-c4d3-4766-b042-986d2228ca91\\\",\\\"systemUUID\\\":\\\"229af786-ea3b-485b-b39a-f6a3c0e23f09\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:21Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:21 crc kubenswrapper[4814]: E0216 09:47:21.094137 4814 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.096448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.096495 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.096505 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.096520 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.096548 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.199936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.200023 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.200042 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.200069 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.200091 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.303300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.303345 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.303357 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.303377 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.303390 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.406989 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.407087 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.407106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.407131 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.407149 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.510062 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.510178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.510204 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.510244 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.510268 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.615645 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.615726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.615750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.615782 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.615806 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.719513 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.719600 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.719616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.719636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.719651 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.822455 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.822511 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.822525 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.822558 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.822570 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.926098 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.926171 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.926183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.926206 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:21 crc kubenswrapper[4814]: I0216 09:47:21.926222 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:21Z","lastTransitionTime":"2026-02-16T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.018196 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:34:46.533433465 +0000 UTC Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.029249 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.029277 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.029287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.029302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.029315 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.131638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.131861 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.131871 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.131885 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.131898 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.235772 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.235840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.235855 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.235878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.235894 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.339593 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.339688 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.339711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.339745 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.339764 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.442850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.442951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.442986 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.443016 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.443036 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.545811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.545870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.545883 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.545905 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.545921 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.649356 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.649440 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.649460 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.649491 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.649514 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.752390 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.752487 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.752512 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.752586 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.752613 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.855584 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.855648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.855666 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.855692 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.855709 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.958589 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.958634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.958646 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.958661 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.958670 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:22Z","lastTransitionTime":"2026-02-16T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.992685 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.993061 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:22 crc kubenswrapper[4814]: E0216 09:47:22.993149 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.993181 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:22 crc kubenswrapper[4814]: I0216 09:47:22.993180 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:22 crc kubenswrapper[4814]: E0216 09:47:22.993339 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:22 crc kubenswrapper[4814]: E0216 09:47:22.993371 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:22 crc kubenswrapper[4814]: E0216 09:47:22.993780 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.012165 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.018594 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:57:40.855676246 +0000 UTC Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.028043 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.050804 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"53ed6503-5c40-4a82-985c-dc46bc5daaed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:47:02Z\\\",\\\"message\\\":\\\"obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-node-ghlbk after 0 failed attempt(s)\\\\nI0216 09:47:02.083930 6873 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-ghlbk\\\\nI0216 09:47:02.083833 6873 model_client.go:398] Mutate operations generated as: 
[{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 09:47:02.084081 6873 ovnkube.go:599] Stopped ovnkube\\\\nI0216 09:47:02.084139 6873 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0216 09:47:02.084179 6873 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 09:47:02.084255 6873 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d55ece75b33b71225
c43060b3031a1ed08843e299e10c185868dd9887315c9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mtjxs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-ghlbk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.061822 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.061889 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.061902 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.061936 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.061950 4814 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.064974 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83343376-433f-46da-b90f-9e1dd9270ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bghwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9dlr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc 
kubenswrapper[4814]: I0216 09:47:23.080948 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://186c97334a1a098bdc73d835889dc9e34b00760174dc2880bf85acaa1b8a4a2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.094274 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-rb5nq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29aff2bc-2aaa-4c9b-9d49-3d12395ec125\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5cd7b05aca4ed29837f52294c2a15f994ac07e70dfe207886f8eda7120ed2eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdr7c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:06Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-rb5nq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.115811 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"07630988-80c2-4370-8944-6f7427a527a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc8c389f64a7f977580f3c589705c7adff8784ecee890081b52c345392c53088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2d02ee4a9b16ce7a1ee9efe79a433f11bb9ccd2c81a4c22dfecc9f5b671899d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d546e894b98291a31cbe98e523efae257f086ab218ae6c4484c6b6532936d7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.133827 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"846a6f96-7843-4093-bef9-35a6ed568122\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfbc3db34db15eb47c8ccbc8da51e3b45221e8ed9d88e289b4c578bcad6397f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://648d86591205d99bbe1feac59920850eb34beee1791f41c7a392dd1293c8fa15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2cf2439545b4f5eccc175ad32e88a01af27070d1543b309481a69cfbe293a23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1eee833882d9b046e5a34038f2699a733b44ac35f2ea7a489da5e0f4429af44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.146785 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:23 crc kubenswrapper[4814]: E0216 09:47:23.146960 4814 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:47:23 crc kubenswrapper[4814]: E0216 09:47:23.147046 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs podName:83343376-433f-46da-b90f-9e1dd9270ea4 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:27.147026693 +0000 UTC m=+164.840182873 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs") pod "network-metrics-daemon-l9dlr" (UID: "83343376-433f-46da-b90f-9e1dd9270ea4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.148510 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.160928 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tq9bc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5f70113-f984-41a9-abda-7b1e787395d8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6147d6d77f746141afcb6ac77018236eb8d0d8898376b720aabb873afd075448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tq9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.165236 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.165270 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.165282 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.165300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.165312 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.172838 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbbce62-bfbb-46b0-b48b-8dcc485ccced\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7003af8bdba8161a363e9f39525d354e17f4402fd151424cb692ee75cc0d2294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a300c34c86d19e4a64486795f82443a43329405d90323809e68b9859d31ed3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.186824 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2253174d-f4ae-4b6a-bfdb-10b821ba8fbe\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:45:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T09:46:03Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0216 09:46:02.727363 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 09:46:02.727518 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 09:46:02.728449 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2749210841/tls.crt::/tmp/serving-cert-2749210841/tls.key\\\\\\\"\\\\nI0216 09:46:03.052141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 09:46:03.059517 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 09:46:03.059567 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 09:46:03.059591 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 09:46:03.059608 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 09:46:03.072195 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 09:46:03.072224 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072229 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 09:46:03.072234 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 09:46:03.072237 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 09:46:03.072240 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 09:46:03.072243 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 09:46:03.072517 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 09:46:03.075107 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:45:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b400a08bb52f3a9566574799ed9193e20
e0f66ba716359f264997e8b1e517227\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:45:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:45:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:45:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.205322 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01f00ee8ae880e9d5d192bac0dc25704eca46fa74b0e778a34262d066a047520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8756c7e3c36881139093e9ab37444be8213e986c02f16204c2066b3551586d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.220288 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7eb0082ec46891db759878f14617a555783282d1d69107e39eaa252dff01c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.241292 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwtrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T09:46:54Z\\\",\\\"message\\\":\\\"2026-02-16T09:46:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429\\\\n2026-02-16T09:46:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_170d9ad4-8650-4d18-920b-ddde61dad429 to /host/opt/cni/bin/\\\\n2026-02-16T09:46:08Z [verbose] multus-daemon started\\\\n2026-02-16T09:46:08Z [verbose] Readiness Indicator file check\\\\n2026-02-16T09:46:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2lftp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwtrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.255328 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22f17e0b-afd9-459b-8451-f247a3c76a74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c16a99846c815519449597da1b72f6b3313b01b37dd2dc1b3513b4d7595af220\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a484d7d777521972d52a5defaea1f80b6903291
55679f02b643128e5e94594a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xld8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wt4c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.267475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.267549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.267563 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc 
kubenswrapper[4814]: I0216 09:47:23.267582 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.267596 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.277060 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a89b210e-c736-4ca5-be0a-0044be5e577b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a47552775267d989df4df691fac42ae013f08b0f37b57aa032f250fabf15b1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef4893299f7ba7eb32389a9ff036baa00910bbc0ee3bf07d0a4fc2be36f9ac88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://972fa8aa7e43f942a146049678f3451d674eb79b3a2d5c756ada703a36765ff5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6f662e36cde8128dcd97b5ed929f1673917963e1467b75c5a9841061bad8001\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b8d90a9729b156c34c673d397d74db2141ed066a4e929f43ffdb4196bab08c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://651801213f44f478aa13081acc1d85d7577dbfa1a9bc3530b08c49bba3fa2f8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56
143bb2c6c5311d219d206682297f7ad6b7dfd70c69eb5fa307614219dabc8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T09:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T09:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rkfmt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb2xj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.289313 4814 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f0a84b8-4c95-425c-ba79-884d3bc65ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T09:46:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7068a7b4ba753d5152a319271e54b423d54d6516ad7a91e7d7878316903af9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0ff12fe8c760cd1e1df3e12e216ee59dfd66
443846df4f20be4f306b47ab057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T09:46:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-knlpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T09:46:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6d992\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T09:47:23Z is after 2025-08-24T17:21:41Z" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.370918 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.370979 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.370989 4814 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.371009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.371020 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.475154 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.475284 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.475308 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.475342 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.475364 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.578007 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.578083 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.578096 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.578116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.578127 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.681289 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.681620 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.681634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.681657 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.681670 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.784855 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.784941 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.784978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.785013 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.785040 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.888447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.888528 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.888590 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.888616 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.888634 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.991978 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.992030 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.992044 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.992064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:23 crc kubenswrapper[4814]: I0216 09:47:23.992078 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:23Z","lastTransitionTime":"2026-02-16T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.018856 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:40:52.136507347 +0000 UTC Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.095475 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.095639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.095716 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.095752 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.095775 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.199974 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.200038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.200050 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.200067 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.200108 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.303595 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.303648 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.303662 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.303682 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.303694 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.406700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.406743 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.406752 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.406770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.406784 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.510271 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.510331 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.510341 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.510378 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.510395 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.613649 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.613713 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.613726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.613742 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.613753 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.717962 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.718014 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.718029 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.718057 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.718076 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.821152 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.821210 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.821227 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.821251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.821272 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.924988 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.925038 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.925051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.925070 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.925083 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:24Z","lastTransitionTime":"2026-02-16T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.993915 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.993923 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.994022 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:24 crc kubenswrapper[4814]: I0216 09:47:24.994165 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:24 crc kubenswrapper[4814]: E0216 09:47:24.994344 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:24 crc kubenswrapper[4814]: E0216 09:47:24.994779 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:24 crc kubenswrapper[4814]: E0216 09:47:24.995041 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:24 crc kubenswrapper[4814]: E0216 09:47:24.995136 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.019692 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:54:57.042540844 +0000 UTC Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.028665 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.028747 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.028762 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.028784 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.028798 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.131678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.131729 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.131738 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.131756 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.131768 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.235510 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.235604 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.235617 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.235636 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.235650 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.339178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.339239 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.339251 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.339273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.339287 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.442712 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.442817 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.442831 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.442852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.442863 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.546408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.546482 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.546496 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.546518 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.546585 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.648937 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.648980 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.648993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.649009 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.649022 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.751492 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.751637 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.751656 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.751683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.751702 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.855471 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.855579 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.855599 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.855627 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.855647 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.958804 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.958856 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.958869 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.958891 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:25 crc kubenswrapper[4814]: I0216 09:47:25.958905 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:25Z","lastTransitionTime":"2026-02-16T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.020136 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:52:04.517967047 +0000 UTC Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.062911 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.062969 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.062981 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.063002 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.063015 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.166049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.166114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.166127 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.166150 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.166164 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.269991 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.270055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.270073 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.270094 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.270107 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.374064 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.374125 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.374137 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.374157 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.374169 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.477247 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.477330 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.477349 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.477381 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.477402 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.581159 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.581245 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.581312 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.581335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.581376 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.686711 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.686781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.686798 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.686829 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.686847 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.790177 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.790248 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.790266 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.790292 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.790310 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.892920 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.893033 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.893085 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.893115 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.893133 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.993219 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.993266 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.993219 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:26 crc kubenswrapper[4814]: E0216 09:47:26.993367 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:26 crc kubenswrapper[4814]: E0216 09:47:26.993573 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.993604 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:26 crc kubenswrapper[4814]: E0216 09:47:26.994242 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:26 crc kubenswrapper[4814]: E0216 09:47:26.994350 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.994595 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" Feb 16 09:47:26 crc kubenswrapper[4814]: E0216 09:47:26.994885 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.995498 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.995527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.995549 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.995561 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:26 crc kubenswrapper[4814]: I0216 09:47:26.995571 4814 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:26Z","lastTransitionTime":"2026-02-16T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.010277 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.020732 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:49:19.650578183 +0000 UTC Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.099348 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.099408 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.099424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.099448 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.099464 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.203741 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.203830 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.203848 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.203878 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.203895 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.305776 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.305814 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.305823 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.305840 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.305849 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.408931 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.408985 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.408998 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.409022 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.409035 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.511993 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.512102 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.512114 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.512147 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.512159 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.615502 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.615583 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.615598 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.615618 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.615631 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.718967 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.719021 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.719032 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.719049 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.719060 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.822639 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.822726 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.822740 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.822770 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.822787 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.925459 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.925505 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.925521 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.925580 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:27 crc kubenswrapper[4814]: I0216 09:47:27.925602 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:27Z","lastTransitionTime":"2026-02-16T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.021882 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:30:48.875008693 +0000 UTC Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.028208 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.028264 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.028273 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.028293 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.028307 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.131799 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.131888 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.131907 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.131938 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.131961 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.235164 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.235197 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.235207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.235223 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.235236 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.339221 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.339281 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.339294 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.339320 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.339336 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.441800 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.441890 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.441916 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.441949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.441969 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.545257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.545302 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.545335 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.545355 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.545368 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.648677 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.648722 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.648732 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.648750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.648761 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.751634 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.751670 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.751683 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.751700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.751713 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.854809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.854842 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.854852 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.854870 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.854880 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.957750 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.957824 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.957834 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.957851 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.957860 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:28Z","lastTransitionTime":"2026-02-16T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.993172 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.993207 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.993262 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:28 crc kubenswrapper[4814]: E0216 09:47:28.993311 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:28 crc kubenswrapper[4814]: I0216 09:47:28.993370 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:28 crc kubenswrapper[4814]: E0216 09:47:28.993515 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:28 crc kubenswrapper[4814]: E0216 09:47:28.993740 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:28 crc kubenswrapper[4814]: E0216 09:47:28.993764 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.022947 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:33:33.934047936 +0000 UTC Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.062678 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.062753 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.062767 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.062809 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.062823 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.166183 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.166225 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.166250 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.166267 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.166277 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.269364 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.269417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.269429 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.269447 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.269460 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.371954 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.372011 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.372026 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.372051 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.372066 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.476143 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.476212 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.476232 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.476261 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.476281 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.579781 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.579850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.579868 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.579896 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.579913 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.682790 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.682872 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.682892 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.682921 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.682940 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.785751 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.785811 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.785832 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.785859 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.785920 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.889218 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.889274 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.889287 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.889306 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.889323 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.992079 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.992178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.992207 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.992245 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:29 crc kubenswrapper[4814]: I0216 09:47:29.992270 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:29Z","lastTransitionTime":"2026-02-16T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.023829 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:55:24.167580402 +0000 UTC Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.095638 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.095690 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.095702 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.095725 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.095744 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.199099 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.199158 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.199170 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.199196 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.199210 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.302850 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.302912 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.302930 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.302957 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.302982 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.406424 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.406476 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.406486 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.406503 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.406517 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.509881 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.509923 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.509933 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.509951 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.509962 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.613452 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.613597 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.613700 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.613734 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.613762 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.716949 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.717003 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.717015 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.717035 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.717048 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.819257 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.819300 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.819310 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.819327 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.819341 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.922417 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.922485 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.922507 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.922572 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.922597 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:30Z","lastTransitionTime":"2026-02-16T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.993420 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.993461 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.993559 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:30 crc kubenswrapper[4814]: I0216 09:47:30.993459 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:30 crc kubenswrapper[4814]: E0216 09:47:30.993720 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:30 crc kubenswrapper[4814]: E0216 09:47:30.993989 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:30 crc kubenswrapper[4814]: E0216 09:47:30.994099 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:30 crc kubenswrapper[4814]: E0216 09:47:30.994253 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.024067 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:49:45.000967371 +0000 UTC Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.026071 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.026178 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.026199 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.026229 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.026251 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:31Z","lastTransitionTime":"2026-02-16T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.130394 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.130445 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.130462 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.130479 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.130494 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:31Z","lastTransitionTime":"2026-02-16T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.232990 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.233055 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.233074 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.233100 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.233123 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:31Z","lastTransitionTime":"2026-02-16T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.332054 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.332106 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.332116 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.332136 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.332145 4814 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T09:47:31Z","lastTransitionTime":"2026-02-16T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.786437 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq"] Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.786968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.789326 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.789400 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.789708 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.789772 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.850525 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.850689 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.850840 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/295975e4-e0ce-4be9-ba84-3818fdccb836-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.850898 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/295975e4-e0ce-4be9-ba84-3818fdccb836-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.850943 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/295975e4-e0ce-4be9-ba84-3818fdccb836-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.893719 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=4.893700058 podStartE2EDuration="4.893700058s" podCreationTimestamp="2026-02-16 09:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.893440741 +0000 UTC m=+109.586596931" watchObservedRunningTime="2026-02-16 09:47:31.893700058 +0000 UTC m=+109.586856238" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.911057 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.911027492 podStartE2EDuration="1m28.911027492s" 
podCreationTimestamp="2026-02-16 09:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.91058541 +0000 UTC m=+109.603741610" watchObservedRunningTime="2026-02-16 09:47:31.911027492 +0000 UTC m=+109.604183672" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.925797 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=61.925769747 podStartE2EDuration="1m1.925769747s" podCreationTimestamp="2026-02-16 09:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.925306585 +0000 UTC m=+109.618462765" watchObservedRunningTime="2026-02-16 09:47:31.925769747 +0000 UTC m=+109.618925947" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952355 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295975e4-e0ce-4be9-ba84-3818fdccb836-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952401 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/295975e4-e0ce-4be9-ba84-3818fdccb836-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952427 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/295975e4-e0ce-4be9-ba84-3818fdccb836-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952504 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952549 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952615 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.952615 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/295975e4-e0ce-4be9-ba84-3818fdccb836-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 
09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.953380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/295975e4-e0ce-4be9-ba84-3818fdccb836-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.963845 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/295975e4-e0ce-4be9-ba84-3818fdccb836-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.969868 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/295975e4-e0ce-4be9-ba84-3818fdccb836-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sr4fq\" (UID: \"295975e4-e0ce-4be9-ba84-3818fdccb836\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.980767 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-rb5nq" podStartSLOduration=87.980746271 podStartE2EDuration="1m27.980746271s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.968470192 +0000 UTC m=+109.661626392" watchObservedRunningTime="2026-02-16 09:47:31.980746271 +0000 UTC m=+109.673902461" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.995872 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.995853746 podStartE2EDuration="33.995853746s" podCreationTimestamp="2026-02-16 09:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.981186293 +0000 UTC m=+109.674342483" watchObservedRunningTime="2026-02-16 09:47:31.995853746 +0000 UTC m=+109.689009926" Feb 16 09:47:31 crc kubenswrapper[4814]: I0216 09:47:31.996298 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.996294077 podStartE2EDuration="1m28.996294077s" podCreationTimestamp="2026-02-16 09:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:31.996249236 +0000 UTC m=+109.689405416" watchObservedRunningTime="2026-02-16 09:47:31.996294077 +0000 UTC m=+109.689450257" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.026825 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:35:33.670828558 +0000 UTC Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.026913 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.035517 4814 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.041380 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tq9bc" podStartSLOduration=88.041348876 podStartE2EDuration="1m28.041348876s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:32.041320095 +0000 UTC m=+109.734476275" watchObservedRunningTime="2026-02-16 09:47:32.041348876 +0000 UTC m=+109.734505056" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.095826 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gwtrg" podStartSLOduration=88.095803925 podStartE2EDuration="1m28.095803925s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:32.095474057 +0000 UTC m=+109.788630237" watchObservedRunningTime="2026-02-16 09:47:32.095803925 +0000 UTC m=+109.788960105" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.103112 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.108378 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podStartSLOduration=88.108346641 podStartE2EDuration="1m28.108346641s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:32.107527569 +0000 UTC m=+109.800683749" watchObservedRunningTime="2026-02-16 09:47:32.108346641 +0000 UTC m=+109.801502831" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.130924 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kb2xj" podStartSLOduration=88.130898717 podStartE2EDuration="1m28.130898717s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 09:47:32.128759789 +0000 UTC m=+109.821915969" watchObservedRunningTime="2026-02-16 09:47:32.130898717 +0000 UTC m=+109.824054897" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.143818 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6d992" podStartSLOduration=87.143806232 podStartE2EDuration="1m27.143806232s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:32.142737334 +0000 UTC m=+109.835893534" watchObservedRunningTime="2026-02-16 09:47:32.143806232 +0000 UTC m=+109.836962402" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.688955 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" event={"ID":"295975e4-e0ce-4be9-ba84-3818fdccb836","Type":"ContainerStarted","Data":"6cec7fef65dbe5ebb961cc9feb68b951950d82f0009491c98327f6ff8a93f5de"} Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.689030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" event={"ID":"295975e4-e0ce-4be9-ba84-3818fdccb836","Type":"ContainerStarted","Data":"03a508a51b4ba24422a2c3170471601452337b8dd6f8eca5db36ccba12e4ef62"} Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.993209 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.993237 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:32 crc kubenswrapper[4814]: E0216 09:47:32.996344 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.996395 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:32 crc kubenswrapper[4814]: E0216 09:47:32.997025 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:32 crc kubenswrapper[4814]: E0216 09:47:32.996491 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:32 crc kubenswrapper[4814]: I0216 09:47:32.996407 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:32 crc kubenswrapper[4814]: E0216 09:47:32.997758 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:34 crc kubenswrapper[4814]: I0216 09:47:34.993147 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:34 crc kubenswrapper[4814]: I0216 09:47:34.993213 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:34 crc kubenswrapper[4814]: I0216 09:47:34.993218 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:34 crc kubenswrapper[4814]: I0216 09:47:34.993238 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:34 crc kubenswrapper[4814]: E0216 09:47:34.993336 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:34 crc kubenswrapper[4814]: E0216 09:47:34.993449 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:34 crc kubenswrapper[4814]: E0216 09:47:34.993607 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:34 crc kubenswrapper[4814]: E0216 09:47:34.993677 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:36 crc kubenswrapper[4814]: I0216 09:47:36.993795 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:36 crc kubenswrapper[4814]: I0216 09:47:36.993813 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:36 crc kubenswrapper[4814]: E0216 09:47:36.994460 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:36 crc kubenswrapper[4814]: I0216 09:47:36.993874 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:36 crc kubenswrapper[4814]: I0216 09:47:36.993851 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:36 crc kubenswrapper[4814]: E0216 09:47:36.994730 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:36 crc kubenswrapper[4814]: E0216 09:47:36.994897 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:36 crc kubenswrapper[4814]: E0216 09:47:36.995036 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:38 crc kubenswrapper[4814]: I0216 09:47:38.992497 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:38 crc kubenswrapper[4814]: I0216 09:47:38.992613 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:38 crc kubenswrapper[4814]: I0216 09:47:38.992761 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:38 crc kubenswrapper[4814]: E0216 09:47:38.992759 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:38 crc kubenswrapper[4814]: I0216 09:47:38.992798 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:38 crc kubenswrapper[4814]: E0216 09:47:38.992957 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:38 crc kubenswrapper[4814]: E0216 09:47:38.993074 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:38 crc kubenswrapper[4814]: E0216 09:47:38.993204 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.727889 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/1.log" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.728802 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/0.log" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.728894 4814 generic.go:334] "Generic (PLEG): container finished" podID="419c1fde-3a56-45c4-b6aa-5c5b8cde8db6" containerID="cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630" exitCode=1 Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.728952 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerDied","Data":"cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630"} Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.729023 4814 scope.go:117] "RemoveContainer" containerID="e873e06a604364a26dbbd8ca46518d9b431c0411fdcbb6d17c0baf7b1da63919" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.729730 4814 scope.go:117] "RemoveContainer" containerID="cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630" Feb 16 09:47:40 crc kubenswrapper[4814]: E0216 09:47:40.730053 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gwtrg_openshift-multus(419c1fde-3a56-45c4-b6aa-5c5b8cde8db6)\"" pod="openshift-multus/multus-gwtrg" podUID="419c1fde-3a56-45c4-b6aa-5c5b8cde8db6" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.758877 4814 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr4fq" podStartSLOduration=96.758838909 podStartE2EDuration="1m36.758838909s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:32.705077269 +0000 UTC m=+110.398233449" watchObservedRunningTime="2026-02-16 09:47:40.758838909 +0000 UTC m=+118.451995169" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.993140 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:40 crc kubenswrapper[4814]: E0216 09:47:40.993308 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.993579 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:40 crc kubenswrapper[4814]: E0216 09:47:40.993652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.993838 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:40 crc kubenswrapper[4814]: I0216 09:47:40.993857 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:40 crc kubenswrapper[4814]: E0216 09:47:40.994031 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:40 crc kubenswrapper[4814]: E0216 09:47:40.994229 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:41 crc kubenswrapper[4814]: I0216 09:47:41.733381 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/1.log" Feb 16 09:47:41 crc kubenswrapper[4814]: I0216 09:47:41.993785 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" Feb 16 09:47:41 crc kubenswrapper[4814]: E0216 09:47:41.994021 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-ghlbk_openshift-ovn-kubernetes(53ed6503-5c40-4a82-985c-dc46bc5daaed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" Feb 16 09:47:42 crc kubenswrapper[4814]: I0216 09:47:42.992813 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:42 crc kubenswrapper[4814]: E0216 09:47:42.992981 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:42 crc kubenswrapper[4814]: I0216 09:47:42.994758 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:42 crc kubenswrapper[4814]: E0216 09:47:42.995438 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:42 crc kubenswrapper[4814]: I0216 09:47:42.995632 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:42 crc kubenswrapper[4814]: I0216 09:47:42.995685 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:42 crc kubenswrapper[4814]: E0216 09:47:42.995748 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:42 crc kubenswrapper[4814]: E0216 09:47:42.995911 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:43 crc kubenswrapper[4814]: E0216 09:47:43.008242 4814 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 09:47:43 crc kubenswrapper[4814]: E0216 09:47:43.103323 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 09:47:44 crc kubenswrapper[4814]: I0216 09:47:44.992667 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 09:47:44 crc kubenswrapper[4814]: I0216 09:47:44.992737 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:44 crc kubenswrapper[4814]: I0216 09:47:44.992831 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 09:47:44 crc kubenswrapper[4814]: E0216 09:47:44.993736 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4" Feb 16 09:47:44 crc kubenswrapper[4814]: E0216 09:47:44.993506 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 09:47:44 crc kubenswrapper[4814]: I0216 09:47:44.993155 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:47:44 crc kubenswrapper[4814]: E0216 09:47:44.993840 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 09:47:44 crc kubenswrapper[4814]: E0216 09:47:44.993923 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 09:47:46 crc kubenswrapper[4814]: I0216 09:47:46.993113 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:47:46 crc kubenswrapper[4814]: I0216 09:47:46.993225 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:46 crc kubenswrapper[4814]: E0216 09:47:46.993276 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:46 crc kubenswrapper[4814]: I0216 09:47:46.993301 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:46 crc kubenswrapper[4814]: E0216 09:47:46.993357 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:46 crc kubenswrapper[4814]: E0216 09:47:46.993400 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:46 crc kubenswrapper[4814]: I0216 09:47:46.994114 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:46 crc kubenswrapper[4814]: E0216 09:47:46.994345 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:48 crc kubenswrapper[4814]: E0216 09:47:48.105025 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 09:47:48 crc kubenswrapper[4814]: I0216 09:47:48.992942 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:48 crc kubenswrapper[4814]: I0216 09:47:48.993031 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:48 crc kubenswrapper[4814]: E0216 09:47:48.993160 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:48 crc kubenswrapper[4814]: I0216 09:47:48.993063 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:48 crc kubenswrapper[4814]: E0216 09:47:48.993274 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:48 crc kubenswrapper[4814]: E0216 09:47:48.993345 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:48 crc kubenswrapper[4814]: I0216 09:47:48.993713 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:48 crc kubenswrapper[4814]: E0216 09:47:48.993974 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:50 crc kubenswrapper[4814]: I0216 09:47:50.992603 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:50 crc kubenswrapper[4814]: I0216 09:47:50.992641 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:50 crc kubenswrapper[4814]: I0216 09:47:50.992718 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:50 crc kubenswrapper[4814]: I0216 09:47:50.992721 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:50 crc kubenswrapper[4814]: E0216 09:47:50.992806 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:50 crc kubenswrapper[4814]: E0216 09:47:50.992884 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:50 crc kubenswrapper[4814]: E0216 09:47:50.993015 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:50 crc kubenswrapper[4814]: E0216 09:47:50.993150 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:52 crc kubenswrapper[4814]: I0216 09:47:52.992847 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:52 crc kubenswrapper[4814]: I0216 09:47:52.992836 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:52 crc kubenswrapper[4814]: I0216 09:47:52.993770 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:52 crc kubenswrapper[4814]: I0216 09:47:52.993959 4814 scope.go:117] "RemoveContainer" containerID="cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630"
Feb 16 09:47:52 crc kubenswrapper[4814]: I0216 09:47:52.994057 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:52 crc kubenswrapper[4814]: E0216 09:47:52.994198 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:52 crc kubenswrapper[4814]: E0216 09:47:52.994259 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:52 crc kubenswrapper[4814]: E0216 09:47:52.994391 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:52 crc kubenswrapper[4814]: E0216 09:47:52.994466 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:53 crc kubenswrapper[4814]: E0216 09:47:53.105790 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 09:47:53 crc kubenswrapper[4814]: I0216 09:47:53.775722 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/1.log"
Feb 16 09:47:53 crc kubenswrapper[4814]: I0216 09:47:53.775782 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerStarted","Data":"ee393866ad3987bd8516a16241c7e7c3516784ac2be70efcdd49929dfcad36fd"}
Feb 16 09:47:54 crc kubenswrapper[4814]: I0216 09:47:54.992647 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:54 crc kubenswrapper[4814]: I0216 09:47:54.992749 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:54 crc kubenswrapper[4814]: I0216 09:47:54.992816 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:54 crc kubenswrapper[4814]: E0216 09:47:54.992942 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:54 crc kubenswrapper[4814]: E0216 09:47:54.993069 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:54 crc kubenswrapper[4814]: E0216 09:47:54.993265 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:54 crc kubenswrapper[4814]: I0216 09:47:54.993878 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:54 crc kubenswrapper[4814]: E0216 09:47:54.994151 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:56 crc kubenswrapper[4814]: I0216 09:47:56.993444 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:56 crc kubenswrapper[4814]: I0216 09:47:56.993580 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:56 crc kubenswrapper[4814]: E0216 09:47:56.993622 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:47:56 crc kubenswrapper[4814]: E0216 09:47:56.993755 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:56 crc kubenswrapper[4814]: I0216 09:47:56.993826 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:56 crc kubenswrapper[4814]: E0216 09:47:56.993884 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:56 crc kubenswrapper[4814]: I0216 09:47:56.993929 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:56 crc kubenswrapper[4814]: E0216 09:47:56.993982 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:56 crc kubenswrapper[4814]: I0216 09:47:56.994827 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2"
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.792266 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/3.log"
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.796279 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerStarted","Data":"652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e"}
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.796887 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk"
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.875493 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podStartSLOduration=113.875464864 podStartE2EDuration="1m53.875464864s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:47:57.828787944 +0000 UTC m=+135.521944144" watchObservedRunningTime="2026-02-16 09:47:57.875464864 +0000 UTC m=+135.568621054"
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.876600 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l9dlr"]
Feb 16 09:47:57 crc kubenswrapper[4814]: I0216 09:47:57.876742 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:57 crc kubenswrapper[4814]: E0216 09:47:57.876880 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:58 crc kubenswrapper[4814]: E0216 09:47:58.108528 4814 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 09:47:58 crc kubenswrapper[4814]: I0216 09:47:58.992969 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:47:58 crc kubenswrapper[4814]: I0216 09:47:58.993052 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:47:58 crc kubenswrapper[4814]: I0216 09:47:58.993005 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:47:58 crc kubenswrapper[4814]: E0216 09:47:58.993247 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:47:58 crc kubenswrapper[4814]: I0216 09:47:58.993306 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:47:58 crc kubenswrapper[4814]: E0216 09:47:58.993427 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:47:58 crc kubenswrapper[4814]: E0216 09:47:58.993501 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:47:58 crc kubenswrapper[4814]: E0216 09:47:58.993561 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:48:00 crc kubenswrapper[4814]: I0216 09:48:00.993232 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:00 crc kubenswrapper[4814]: E0216 09:48:00.993445 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:48:00 crc kubenswrapper[4814]: I0216 09:48:00.993854 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:00 crc kubenswrapper[4814]: I0216 09:48:00.994032 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:00 crc kubenswrapper[4814]: I0216 09:48:00.994162 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:48:00 crc kubenswrapper[4814]: E0216 09:48:00.994039 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:48:00 crc kubenswrapper[4814]: E0216 09:48:00.994250 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:48:00 crc kubenswrapper[4814]: E0216 09:48:00.994398 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:48:02 crc kubenswrapper[4814]: I0216 09:48:02.993042 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:02 crc kubenswrapper[4814]: I0216 09:48:02.993143 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:02 crc kubenswrapper[4814]: I0216 09:48:02.993213 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:48:02 crc kubenswrapper[4814]: E0216 09:48:02.995442 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 09:48:02 crc kubenswrapper[4814]: I0216 09:48:02.995494 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:02 crc kubenswrapper[4814]: E0216 09:48:02.995734 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 09:48:02 crc kubenswrapper[4814]: E0216 09:48:02.995773 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 09:48:02 crc kubenswrapper[4814]: E0216 09:48:02.995836 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9dlr" podUID="83343376-433f-46da-b90f-9e1dd9270ea4"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.993557 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.993671 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.993686 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.993687 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.997353 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.997612 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.997969 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.998423 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.998695 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 09:48:04 crc kubenswrapper[4814]: I0216 09:48:04.999918 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 09:48:07 crc kubenswrapper[4814]: I0216 09:48:07.960782 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 09:48:07 crc kubenswrapper[4814]: I0216 09:48:07.960900 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.831718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:10 crc kubenswrapper[4814]: E0216 09:48:10.832045 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:50:12.831998642 +0000 UTC m=+270.525154872 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.832612 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.832688 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.834021 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.844314 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.933953 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.934058 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.938729 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:10 crc kubenswrapper[4814]: I0216 09:48:10.939675 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.023449 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.038689 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.073465 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 09:48:11 crc kubenswrapper[4814]: W0216 09:48:11.545939 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-cbd64581312e57b6bb2b84fd75beeb912e87c09250a2b5cc02b2ed4e48223749 WatchSource:0}: Error finding container cbd64581312e57b6bb2b84fd75beeb912e87c09250a2b5cc02b2ed4e48223749: Status 404 returned error can't find the container with id cbd64581312e57b6bb2b84fd75beeb912e87c09250a2b5cc02b2ed4e48223749
Feb 16 09:48:11 crc kubenswrapper[4814]: W0216 09:48:11.553997 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3998ab4b46718d415f7a870d9071cf06d98cf7bed84b2f73ae8b3a9afb27cb2a WatchSource:0}: Error finding container 3998ab4b46718d415f7a870d9071cf06d98cf7bed84b2f73ae8b3a9afb27cb2a: Status 404 returned error can't find the container with id 3998ab4b46718d415f7a870d9071cf06d98cf7bed84b2f73ae8b3a9afb27cb2a
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.861284 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"65ccbbdf6d5d4a0cdc0935e7d9e571d310ac1308db00c1901afe3e8f459e24ea"}
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.861366 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"cbd64581312e57b6bb2b84fd75beeb912e87c09250a2b5cc02b2ed4e48223749"}
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.861621 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.863398 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5cb9dbfd876ed99192a3f1ddc1df77dcf3fb9d83d3804ad4c8075e1c1a62b9fd"}
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.863454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3998ab4b46718d415f7a870d9071cf06d98cf7bed84b2f73ae8b3a9afb27cb2a"}
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.867237 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d0ec2b887ecc45f7d7241962bf4d991605e796c54a104d5dedb3e324c1c651df"}
Feb 16 09:48:11 crc kubenswrapper[4814]: I0216 09:48:11.867334 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"195ef328677530b6d51a144d7e4ce442d313e8d0b8a34ffe1107aebc39a0a215"}
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.615527 4814 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.668449 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4p95d"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.669206 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.670648 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.677676 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.685614 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.685821 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.686026 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.686298 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.685635 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.687168 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.695630 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.695842 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696012 4814 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696162 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696402 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696627 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696733 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.696965 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.697155 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.697601 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.698261 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.699762 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.700694 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.701262 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fsxcr"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.701864 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.702221 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.706067 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.707339 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.707798 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.707855 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.708448 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gfngr"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.709054 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.709380 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.709595 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.710017 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.710986 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711103 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711156 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711520 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711714 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711900 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.711913 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.712088 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.712262 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.712777 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.712993 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-j5fnw"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.713672 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-j5fnw" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.717317 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.717925 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4xwqr"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.718300 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.718642 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.719230 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4p95d"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.721117 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.724147 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.726780 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.726914 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.727328 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.730075 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-swmkw"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.730427 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.731382 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.731624 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.731692 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.731870 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.731913 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.732016 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.732074 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.732164 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" 
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.732255 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.734071 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.735014 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.735380 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5kfsf"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.735777 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.736384 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.738665 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d6q92"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.739355 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.739464 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.740486 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.741189 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.743433 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.744034 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.744312 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.744513 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.744730 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.744891 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745102 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745239 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745311 4814 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745447 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745590 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.745711 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.750335 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.750619 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.750821 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.751010 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.751286 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.751425 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.751811 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 09:48:12 crc 
kubenswrapper[4814]: I0216 09:48:12.751996 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752112 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752200 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752320 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752724 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752968 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.753207 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.758559 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764485 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-node-pullsecrets\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.752020 4814 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764586 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764627 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-config\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764662 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3d36256-4e8e-460d-ad98-eaaafbb76021-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764703 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764741 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764776 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.764814 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765158 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765191 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765235 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-client\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765412 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-encryption-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765454 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-config\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765483 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765574 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-images\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765611 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765746 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765787 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765823 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765872 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765938 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtnz\" (UniqueName: \"kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.765980 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-audit-policies\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766146 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379f8a26-453f-4cda-878a-8b3b04c3be54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766163 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766190 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/891ce392-5d04-4f40-bc6e-f0660568526e-audit-dir\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766345 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshl8\" (UniqueName: \"kubernetes.io/projected/891ce392-5d04-4f40-bc6e-f0660568526e-kube-api-access-gshl8\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766389 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bs8z\" (UniqueName: \"kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766557 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766826 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.767135 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.767651 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.768137 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.768477 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.766434 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-serving-cert\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772295 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns89n\" (UniqueName: \"kubernetes.io/projected/4357f219-ec6a-4ada-863f-60ec8dbe0636-kube-api-access-ns89n\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772342 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772378 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772406 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-trusted-ca\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772445 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-machine-approver-tls\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772494 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghvm\" (UniqueName: \"kubernetes.io/projected/379f8a26-453f-4cda-878a-8b3b04c3be54-kube-api-access-2ghvm\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772524 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772767 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-serving-cert\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772797 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit-dir\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772833 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357f219-ec6a-4ada-863f-60ec8dbe0636-serving-cert\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772864 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772896 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772929 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-auth-proxy-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.772964 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773006 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx26f\" (UniqueName: \"kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773037 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773080 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773153 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95h77\" (UniqueName: \"kubernetes.io/projected/b3d36256-4e8e-460d-ad98-eaaafbb76021-kube-api-access-95h77\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773219 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4zwp\" (UniqueName: \"kubernetes.io/projected/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-kube-api-access-v4zwp\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773251 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773289 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773321 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773345 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773352 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/379f8a26-453f-4cda-878a-8b3b04c3be54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773389 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773430 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-encryption-config\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773468 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773497 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773528 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773607 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773630 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-client\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773660 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773693 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-serving-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773746 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773781 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbtx6\" (UniqueName: \"kubernetes.io/projected/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-kube-api-access-cbtx6\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.773820 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.774076 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-image-import-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.800320 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.801189 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.801624 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.801787 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.802149 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.804770 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.805696 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.805837 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.805932 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.806036 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.806330 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.808380 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.808438 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.808948 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.810158 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.810478 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.810605 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.813197 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fsxcr"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.815374 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gfngr"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.816355 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.817094 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.824195 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.824523 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.826838 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.830694 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.832305 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.832747 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.833004 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.833468 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.833484 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.834244 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.836797 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.837359 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.838160 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.838614 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.841488 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.842502 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.843361 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.846225 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.846950 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-9kljz"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.859651 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.860432 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.860871 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.863352 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9kljz"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.865331 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.867756 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.872418 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.874052 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876332 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-image-import-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876483 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876608 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876705 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-node-pullsecrets\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876791 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-config\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876869 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.876956 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3d36256-4e8e-460d-ad98-eaaafbb76021-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877337 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877461 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877758 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877847 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877937 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.878399 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-client\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.877374 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-node-pullsecrets\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.878830 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.879583 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-encryption-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.879664 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-config\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.879781 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-image-import-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.886004 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-config\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.886075 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.888519 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp"
Feb 16 09:48:12 crc kubenswrapper[4814]:
I0216 09:48:12.888632 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.889225 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-encryption-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.889337 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.890002 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.891415 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.891973 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.892293 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-config\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.879790 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.893264 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.895727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-images\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.895834 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.895937 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtnz\" (UniqueName: \"kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896035 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-audit-policies\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896119 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bs8z\" (UniqueName: \"kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896202 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379f8a26-453f-4cda-878a-8b3b04c3be54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896278 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/891ce392-5d04-4f40-bc6e-f0660568526e-audit-dir\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896357 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gshl8\" (UniqueName: \"kubernetes.io/projected/891ce392-5d04-4f40-bc6e-f0660568526e-kube-api-access-gshl8\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896467 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-serving-cert\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896589 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896700 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896808 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ns89n\" (UniqueName: \"kubernetes.io/projected/4357f219-ec6a-4ada-863f-60ec8dbe0636-kube-api-access-ns89n\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896909 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-machine-approver-tls\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896989 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-trusted-ca\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897057 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit-dir\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897189 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghvm\" (UniqueName: \"kubernetes.io/projected/379f8a26-453f-4cda-878a-8b3b04c3be54-kube-api-access-2ghvm\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:12 crc 
kubenswrapper[4814]: I0216 09:48:12.897273 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897342 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-serving-cert\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897656 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897739 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357f219-ec6a-4ada-863f-60ec8dbe0636-serving-cert\") pod 
\"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897821 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-auth-proxy-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897914 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.897990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx26f\" (UniqueName: \"kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.898274 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.898444 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899010 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95h77\" (UniqueName: \"kubernetes.io/projected/b3d36256-4e8e-460d-ad98-eaaafbb76021-kube-api-access-95h77\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899106 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4zwp\" (UniqueName: \"kubernetes.io/projected/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-kube-api-access-v4zwp\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899189 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899274 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc 
kubenswrapper[4814]: I0216 09:48:12.899343 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899409 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-encryption-config\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899480 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/379f8a26-453f-4cda-878a-8b3b04c3be54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899661 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " 
pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899736 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899804 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.899947 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.900046 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-serving-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.900272 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.900941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-client\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.902435 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.902568 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbtx6\" (UniqueName: \"kubernetes.io/projected/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-kube-api-access-cbtx6\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.903352 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.903459 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4357f219-ec6a-4ada-863f-60ec8dbe0636-trusted-ca\") pod 
\"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.893758 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.904062 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-serving-ca\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.904844 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.893723 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-etcd-client\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.905749 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.906343 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit-dir\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.906498 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.906792 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.906922 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.907816 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"] Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.909295 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.907980 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.910073 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-config\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.908816 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.908978 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-auth-proxy-config\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.910106 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/891ce392-5d04-4f40-bc6e-f0660568526e-audit-dir\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 
09:48:12.900328 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-audit-policies\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.910982 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.910999 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-audit\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.911156 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.907934 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.911562 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.911714 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.916440 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.896947 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/891ce392-5d04-4f40-bc6e-f0660568526e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.895991 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.918265 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4357f219-ec6a-4ada-863f-60ec8dbe0636-serving-cert\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.919211 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-serving-cert\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.919800 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.919883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b3d36256-4e8e-460d-ad98-eaaafbb76021-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.920407 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379f8a26-453f-4cda-878a-8b3b04c3be54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.923242 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.923406 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-machine-approver-tls\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.929942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.929949 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b3d36256-4e8e-460d-ad98-eaaafbb76021-images\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.930317 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-encryption-config\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.931620 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.931653 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.931665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-etcd-client\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.932290 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.932292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.933158 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/379f8a26-453f-4cda-878a-8b3b04c3be54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.933888 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/891ce392-5d04-4f40-bc6e-f0660568526e-serving-cert\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.934824 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert\") pod \"console-f9d7485db-4xwqr\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.940459 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.941136 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.945970 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.946560 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.947341 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.947319 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.947888 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.948111 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.948548 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.949934 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kmk4b"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.950379 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmk4b"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.950523 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.951835 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.952265 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.952371 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.953081 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.953474 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.954199 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p27f4"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.954608 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p27f4"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.955921 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.956324 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.956869 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-27xhf"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.957815 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.961307 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.963175 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.964750 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.966725 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.970384 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j5fnw"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.972317 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.974791 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xwqr"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.976064 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-swmkw"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.977156 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.978287 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.979443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.980730 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.981690 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.982804 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.984043 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.985037 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5kfsf"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.992968 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-b9r5t"]
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.995964 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 09:48:12 crc kubenswrapper[4814]: I0216 09:48:12.997283 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-b9r5t"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.012039 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018610 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018667 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018683 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018695 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018706 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.018718 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.020265 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.022065 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.023623 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.024770 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d6q92"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.025863 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.027228 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-b9r5t"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.028224 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-27xhf"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.029544 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p27f4"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.030794 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8nms9"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.032730 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8nms9"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.032881 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7phc6"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.034352 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8nms9"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.034496 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7phc6"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.035405 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7phc6"]
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.071710 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.092480 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.112501 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114831 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114913 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj8c9\" (UniqueName: \"kubernetes.io/projected/5d9feb14-2511-4e1e-a78a-e737ae28770c-kube-api-access-wj8c9\") pod \"downloads-7954f5f757-j5fnw\" (UID: \"5d9feb14-2511-4e1e-a78a-e737ae28770c\") " pod="openshift-console/downloads-7954f5f757-j5fnw"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114934 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114953 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/09532720-4c09-46f9-9dc7-c3d201c74171-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.114992 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115062 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115294 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115389 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfkg\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115426 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-serving-cert\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115464 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8ht\" (UniqueName: \"kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115494 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115546 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115644 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115691 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115775 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115844 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6bp\" (UniqueName: \"kubernetes.io/projected/09532720-4c09-46f9-9dc7-c3d201c74171-kube-api-access-cv6bp\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.115984 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9dk\" (UniqueName: \"kubernetes.io/projected/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-kube-api-access-gx9dk\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.116354 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.616333724 +0000 UTC m=+151.309490114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.132275 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.151953 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.172011 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.193711 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.212086 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.216880 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.217112 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.717070781 +0000 UTC m=+151.410226971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217182 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9hm\" (UniqueName: \"kubernetes.io/projected/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-kube-api-access-4s9hm\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217235 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qhs\" (UniqueName: \"kubernetes.io/projected/bdff7274-020d-47de-a573-391747c777ac-kube-api-access-v4qhs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217553 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86284abf-f706-432c-871d-5742dca5966b-config\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217638 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-node-bootstrap-token\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217698 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9dk\" (UniqueName: \"kubernetes.io/projected/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-kube-api-access-gx9dk\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217732 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217762 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217789 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0956f442-216c-4be4-9c81-efcb02614c3f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217818 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrfp\" (UniqueName: \"kubernetes.io/projected/90271beb-156c-4e46-9965-b2d169d7cb67-kube-api-access-dlrfp\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.217885 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218003 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"
Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218123 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sc9\" (UniqueName:
\"kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218194 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d131f500-cc08-4500-802b-9c7ccb8f5457-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218226 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218254 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86284abf-f706-432c-871d-5742dca5966b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218329 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-csi-data-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " 
pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218358 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52fbaf98-06b3-4c96-8155-a94db62cdc56-proxy-tls\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218417 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-serving-cert\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218446 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-registration-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218487 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218516 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/9747dc3f-ea55-4af7-8561-eded508bd884-proxy-tls\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218617 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f887\" (UniqueName: \"kubernetes.io/projected/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-kube-api-access-4f887\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218642 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218701 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218747 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90271beb-156c-4e46-9965-b2d169d7cb67-tmpfs\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218804 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj8c9\" (UniqueName: \"kubernetes.io/projected/5d9feb14-2511-4e1e-a78a-e737ae28770c-kube-api-access-wj8c9\") pod \"downloads-7954f5f757-j5fnw\" (UID: \"5d9feb14-2511-4e1e-a78a-e737ae28770c\") " pod="openshift-console/downloads-7954f5f757-j5fnw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218842 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-service-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218863 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gnsr\" (UniqueName: \"kubernetes.io/projected/ae986417-8048-44d4-b110-6bbe3ab2ce7e-kube-api-access-5gnsr\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218888 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218910 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.218964 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29b2\" (UniqueName: \"kubernetes.io/projected/028cc490-1e41-4efa-b193-42ff552e7a15-kube-api-access-z29b2\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219020 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219039 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbrc\" (UniqueName: \"kubernetes.io/projected/0956f442-216c-4be4-9c81-efcb02614c3f-kube-api-access-thbrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219192 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-webhook-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219234 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnh6\" (UniqueName: \"kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219308 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219359 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqfwx\" (UniqueName: \"kubernetes.io/projected/ead6a0b3-8183-4435-96ea-77026e4d9cf0-kube-api-access-dqfwx\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219391 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-plugins-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219545 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-mlrnk\" (UniqueName: \"kubernetes.io/projected/73fb725a-9a40-4283-8e3e-296294a08655-kube-api-access-mlrnk\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219657 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqv9d\" (UniqueName: \"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-kube-api-access-nqv9d\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219691 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-service-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219714 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcg8\" (UniqueName: \"kubernetes.io/projected/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-kube-api-access-mmcg8\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219956 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9db\" (UniqueName: \"kubernetes.io/projected/9747dc3f-ea55-4af7-8561-eded508bd884-kube-api-access-2j9db\") pod 
\"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.219999 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-images\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220029 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-stats-auth\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220095 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-socket-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220122 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-mountpoint-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220229 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220258 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-srv-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220293 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-serving-cert\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220396 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220428 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-metrics-certs\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220466 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8k9\" (UniqueName: \"kubernetes.io/projected/6c22d3e2-5990-4295-804d-318a7321bc22-kube-api-access-wv8k9\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220653 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff0705e-83d4-4955-9a05-03dfec15075b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220685 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.220724 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.226055 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.227033 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-default-certificate\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.227088 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc4bb\" (UniqueName: \"kubernetes.io/projected/c5b89f0c-c038-4eec-8942-bf236eb9ead0-kube-api-access-fc4bb\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.227121 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-certs\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.227131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.227230 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.228639 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.728626078 +0000 UTC m=+151.421782268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.228697 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c725e3-26cb-474c-a672-d76cdda6a5de-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.229186 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.229241 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8045f9a7-b013-41be-9aef-270522765538-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.229904 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6bp\" (UniqueName: \"kubernetes.io/projected/09532720-4c09-46f9-9dc7-c3d201c74171-kube-api-access-cv6bp\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230071 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsgdv\" (UniqueName: \"kubernetes.io/projected/38346635-3608-48de-967c-aef6ea2b0789-kube-api-access-bsgdv\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230410 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfgrv\" (UniqueName: \"kubernetes.io/projected/c6caef89-a08c-46ec-b2c8-af0f2b795b02-kube-api-access-lfgrv\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230485 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230560 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-config\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230626 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d131f500-cc08-4500-802b-9c7ccb8f5457-config\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230648 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c22d3e2-5990-4295-804d-318a7321bc22-serving-cert\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230770 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: 
\"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-srv-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230915 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230946 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.230974 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.231008 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/09532720-4c09-46f9-9dc7-c3d201c74171-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.232153 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.234417 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235073 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045f9a7-b013-41be-9aef-270522765538-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235381 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235445 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c6caef89-a08c-46ec-b2c8-af0f2b795b02-service-ca-bundle\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235551 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fbaf98-06b3-4c96-8155-a94db62cdc56-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235589 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235625 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235668 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-client\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 
09:48:13.235701 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045f9a7-b013-41be-9aef-270522765538-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235752 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235791 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgz6l\" (UniqueName: \"kubernetes.io/projected/3f2423fe-728b-4236-9d03-04e3472c915e-kube-api-access-xgz6l\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235883 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d131f500-cc08-4500-802b-9c7ccb8f5457-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" 
(UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235915 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fvk\" (UniqueName: \"kubernetes.io/projected/20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e-kube-api-access-v9fvk\") pod \"migrator-59844c95c7-mqgk9\" (UID: \"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.235996 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfkg\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236029 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236061 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-apiservice-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 
09:48:13.236088 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff0705e-83d4-4955-9a05-03dfec15075b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236120 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236198 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-config\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236232 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5k4\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-kube-api-access-mt5k4\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236263 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnb5\" (UniqueName: \"kubernetes.io/projected/d3dffd75-c805-4e30-b870-fdc5fd583c91-kube-api-access-2lnb5\") pod 
\"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236311 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8ht\" (UniqueName: \"kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236344 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5b89f0c-c038-4eec-8942-bf236eb9ead0-metrics-tls\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236373 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549tv\" (UniqueName: \"kubernetes.io/projected/52fbaf98-06b3-4c96-8155-a94db62cdc56-kube-api-access-549tv\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236414 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236442 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236470 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91c725e3-26cb-474c-a672-d76cdda6a5de-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236515 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236590 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236621 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config\") pod 
\"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236654 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvx2\" (UniqueName: \"kubernetes.io/projected/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-kube-api-access-wgvx2\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236699 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxvz\" (UniqueName: \"kubernetes.io/projected/01bc9817-c469-4d6e-a6cc-cd0463962993-kube-api-access-fjxvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236729 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0956f442-216c-4be4-9c81-efcb02614c3f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.236757 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86284abf-f706-432c-871d-5742dca5966b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.237622 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.238307 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.238444 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.239975 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.241507 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.245191 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.246519 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/09532720-4c09-46f9-9dc7-c3d201c74171-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.251582 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.272201 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.292864 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.312608 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.332278 4814 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.338307 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.338479 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.838455338 +0000 UTC m=+151.531611518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.338603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrnk\" (UniqueName: \"kubernetes.io/projected/73fb725a-9a40-4283-8e3e-296294a08655-kube-api-access-mlrnk\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.338682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqv9d\" (UniqueName: 
\"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-kube-api-access-nqv9d\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.338710 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-service-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339721 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-service-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339792 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcg8\" (UniqueName: \"kubernetes.io/projected/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-kube-api-access-mmcg8\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339843 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9db\" (UniqueName: \"kubernetes.io/projected/9747dc3f-ea55-4af7-8561-eded508bd884-kube-api-access-2j9db\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339891 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-images\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339915 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-stats-auth\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339947 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-socket-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339971 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-mountpoint-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.339996 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " 
pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340017 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-srv-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340050 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340078 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-metrics-certs\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340106 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv8k9\" (UniqueName: \"kubernetes.io/projected/6c22d3e2-5990-4295-804d-318a7321bc22-kube-api-access-wv8k9\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340131 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ff0705e-83d4-4955-9a05-03dfec15075b-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340175 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc4bb\" (UniqueName: \"kubernetes.io/projected/c5b89f0c-c038-4eec-8942-bf236eb9ead0-kube-api-access-fc4bb\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340200 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-certs\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-default-certificate\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340261 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340290 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c725e3-26cb-474c-a672-d76cdda6a5de-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340318 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8045f9a7-b013-41be-9aef-270522765538-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340373 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsgdv\" (UniqueName: \"kubernetes.io/projected/38346635-3608-48de-967c-aef6ea2b0789-kube-api-access-bsgdv\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340415 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfgrv\" (UniqueName: \"kubernetes.io/projected/c6caef89-a08c-46ec-b2c8-af0f2b795b02-kube-api-access-lfgrv\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340449 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-config\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 
09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340482 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340728 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d131f500-cc08-4500-802b-9c7ccb8f5457-config\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340756 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c22d3e2-5990-4295-804d-318a7321bc22-serving-cert\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-srv-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340815 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-mountpoint-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: 
\"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340818 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340925 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.340970 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341007 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045f9a7-b013-41be-9aef-270522765538-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341032 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341086 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6caef89-a08c-46ec-b2c8-af0f2b795b02-service-ca-bundle\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.341109 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.841096698 +0000 UTC m=+151.534252998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341165 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fbaf98-06b3-4c96-8155-a94db62cdc56-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341203 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-client\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341232 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045f9a7-b013-41be-9aef-270522765538-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341265 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341292 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d131f500-cc08-4500-802b-9c7ccb8f5457-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341321 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgz6l\" (UniqueName: \"kubernetes.io/projected/3f2423fe-728b-4236-9d03-04e3472c915e-kube-api-access-xgz6l\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:13 
crc kubenswrapper[4814]: I0216 09:48:13.341387 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fvk\" (UniqueName: \"kubernetes.io/projected/20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e-kube-api-access-v9fvk\") pod \"migrator-59844c95c7-mqgk9\" (UID: \"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341445 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff0705e-83d4-4955-9a05-03dfec15075b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341487 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341515 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-apiservice-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341586 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnb5\" (UniqueName: \"kubernetes.io/projected/d3dffd75-c805-4e30-b870-fdc5fd583c91-kube-api-access-2lnb5\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341617 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-config\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341620 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-config\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341645 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5k4\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-kube-api-access-mt5k4\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341683 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/c5b89f0c-c038-4eec-8942-bf236eb9ead0-metrics-tls\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341710 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549tv\" (UniqueName: \"kubernetes.io/projected/52fbaf98-06b3-4c96-8155-a94db62cdc56-kube-api-access-549tv\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341736 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341764 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91c725e3-26cb-474c-a672-d76cdda6a5de-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341791 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341817 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvx2\" (UniqueName: \"kubernetes.io/projected/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-kube-api-access-wgvx2\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341846 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341870 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxvz\" (UniqueName: \"kubernetes.io/projected/01bc9817-c469-4d6e-a6cc-cd0463962993-kube-api-access-fjxvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341898 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0956f442-216c-4be4-9c81-efcb02614c3f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341927 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86284abf-f706-432c-871d-5742dca5966b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341921 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341953 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9hm\" (UniqueName: \"kubernetes.io/projected/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-kube-api-access-4s9hm\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.341986 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4qhs\" (UniqueName: \"kubernetes.io/projected/bdff7274-020d-47de-a573-391747c777ac-kube-api-access-v4qhs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342023 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86284abf-f706-432c-871d-5742dca5966b-config\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: 
\"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342057 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-node-bootstrap-token\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342084 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342125 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342148 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342171 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0956f442-216c-4be4-9c81-efcb02614c3f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342200 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrfp\" (UniqueName: \"kubernetes.io/projected/90271beb-156c-4e46-9965-b2d169d7cb67-kube-api-access-dlrfp\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342228 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9sc9\" (UniqueName: \"kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342263 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342335 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d131f500-cc08-4500-802b-9c7ccb8f5457-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342364 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342389 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86284abf-f706-432c-871d-5742dca5966b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342417 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-csi-data-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342445 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52fbaf98-06b3-4c96-8155-a94db62cdc56-proxy-tls\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-serving-cert\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342498 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-registration-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342508 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d131f500-cc08-4500-802b-9c7ccb8f5457-config\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342528 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9747dc3f-ea55-4af7-8561-eded508bd884-proxy-tls\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342585 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f887\" (UniqueName: \"kubernetes.io/projected/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-kube-api-access-4f887\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342609 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342637 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90271beb-156c-4e46-9965-b2d169d7cb67-tmpfs\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342714 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-service-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342740 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gnsr\" (UniqueName: \"kubernetes.io/projected/ae986417-8048-44d4-b110-6bbe3ab2ce7e-kube-api-access-5gnsr\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342798 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z29b2\" (UniqueName: \"kubernetes.io/projected/028cc490-1e41-4efa-b193-42ff552e7a15-kube-api-access-z29b2\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342825 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342853 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thbrc\" (UniqueName: \"kubernetes.io/projected/0956f442-216c-4be4-9c81-efcb02614c3f-kube-api-access-thbrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ff0705e-83d4-4955-9a05-03dfec15075b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342878 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-webhook-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342897 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlnh6\" (UniqueName: \"kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342917 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqfwx\" (UniqueName: \"kubernetes.io/projected/ead6a0b3-8183-4435-96ea-77026e4d9cf0-kube-api-access-dqfwx\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342933 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-plugins-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342947 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fbaf98-06b3-4c96-8155-a94db62cdc56-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342954 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.342012 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-socket-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.344461 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-registration-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.344731 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0956f442-216c-4be4-9c81-efcb02614c3f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.345436 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.345821 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c22d3e2-5990-4295-804d-318a7321bc22-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.345949 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90271beb-156c-4e46-9965-b2d169d7cb67-tmpfs\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.346063 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-csi-data-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.346423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86284abf-f706-432c-871d-5742dca5966b-config\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: 
\"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.346426 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-plugins-dir\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.346448 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-service-ca\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.346855 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9747dc3f-ea55-4af7-8561-eded508bd884-images\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.347188 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d131f500-cc08-4500-802b-9c7ccb8f5457-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.347243 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-config\") pod 
\"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.347860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ff0705e-83d4-4955-9a05-03dfec15075b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.347869 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-etcd-client\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.350255 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0956f442-216c-4be4-9c81-efcb02614c3f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.351051 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5b89f0c-c038-4eec-8942-bf236eb9ead0-metrics-tls\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.351284 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.351786 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86284abf-f706-432c-871d-5742dca5966b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.353732 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-serving-cert\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.354945 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c22d3e2-5990-4295-804d-318a7321bc22-serving-cert\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.371828 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.378376 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9747dc3f-ea55-4af7-8561-eded508bd884-proxy-tls\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 
09:48:13.392652 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.411939 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.415291 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/91c725e3-26cb-474c-a672-d76cdda6a5de-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.432938 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.444759 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.444984 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.944956199 +0000 UTC m=+151.638112379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.445133 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.445980 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:13.945952786 +0000 UTC m=+151.639108966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.459321 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.464280 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91c725e3-26cb-474c-a672-d76cdda6a5de-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.472333 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.491992 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.512490 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.532636 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.547286 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.547460 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.047430823 +0000 UTC m=+151.740587003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.547801 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.548228 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.048211023 +0000 UTC m=+151.741367203 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.552629 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.572015 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.592567 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.603513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045f9a7-b013-41be-9aef-270522765538-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.611427 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.625018 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045f9a7-b013-41be-9aef-270522765538-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.632024 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.649501 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.650810 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.15077869 +0000 UTC m=+151.843934870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.650834 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.665372 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-metrics-certs\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.671417 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.691518 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.706440 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-default-certificate\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.711298 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 
09:48:13.726411 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c6caef89-a08c-46ec-b2c8-af0f2b795b02-stats-auth\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.731274 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.744057 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6caef89-a08c-46ec-b2c8-af0f2b795b02-service-ca-bundle\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.751889 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.752203 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.752696 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.252671748 +0000 UTC m=+151.945827928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.771269 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.780580 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52fbaf98-06b3-4c96-8155-a94db62cdc56-proxy-tls\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.791396 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.836185 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtnz\" (UniqueName: \"kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz\") pod \"oauth-openshift-558db77b4-vv6v6\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.851593 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bs8z\" (UniqueName: \"kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z\") pod \"console-f9d7485db-4xwqr\" (UID: 
\"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.855058 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.856242 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.356216081 +0000 UTC m=+152.049372281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.864705 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.872486 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4zwp\" (UniqueName: \"kubernetes.io/projected/3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d-kube-api-access-v4zwp\") pod \"machine-approver-56656f9798-4wrlr\" (UID: \"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.901204 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbtx6\" (UniqueName: \"kubernetes.io/projected/c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98-kube-api-access-cbtx6\") pod \"apiserver-76f77b778f-fsxcr\" (UID: \"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98\") " pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.909884 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gshl8\" (UniqueName: \"kubernetes.io/projected/891ce392-5d04-4f40-bc6e-f0660568526e-kube-api-access-gshl8\") pod \"apiserver-7bbb656c7d-7vx4m\" (UID: \"891ce392-5d04-4f40-bc6e-f0660568526e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.910016 4814 request.go:700] Waited for 1.003944746s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.929814 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx26f\" (UniqueName: \"kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f\") pod \"controller-manager-879f6c89f-jt6sp\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.930075 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.931970 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.951124 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.959989 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.965549 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:13 crc kubenswrapper[4814]: E0216 09:48:13.973084 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.473033946 +0000 UTC m=+152.166190126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:13 crc kubenswrapper[4814]: I0216 09:48:13.982457 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.003723 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.011066 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.013678 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-apiservice-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.014247 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90271beb-156c-4e46-9965-b2d169d7cb67-webhook-cert\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.015484 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.036249 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.055252 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns89n\" (UniqueName: \"kubernetes.io/projected/4357f219-ec6a-4ada-863f-60ec8dbe0636-kube-api-access-ns89n\") pod \"console-operator-58897d9998-gfngr\" (UID: \"4357f219-ec6a-4ada-863f-60ec8dbe0636\") " pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.063864 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.072782 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95h77\" (UniqueName: \"kubernetes.io/projected/b3d36256-4e8e-460d-ad98-eaaafbb76021-kube-api-access-95h77\") pod \"machine-api-operator-5694c8668f-4p95d\" (UID: \"b3d36256-4e8e-460d-ad98-eaaafbb76021\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.073737 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.073884 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.573862756 +0000 UTC m=+152.267018936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.075098 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.075864 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.575848439 +0000 UTC m=+152.269004619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.091130 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghvm\" (UniqueName: \"kubernetes.io/projected/379f8a26-453f-4cda-878a-8b3b04c3be54-kube-api-access-2ghvm\") pod \"openshift-apiserver-operator-796bbdcf4f-gss2f\" (UID: \"379f8a26-453f-4cda-878a-8b3b04c3be54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.098018 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.118025 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.125441 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xwqr"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.136458 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.151139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") 
" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.158488 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.165633 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.172015 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.176224 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.177022 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.676985898 +0000 UTC m=+152.370142078 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.177242 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.177891 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.677872441 +0000 UTC m=+152.371028621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.180890 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.181205 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.182809 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-profile-collector-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.188914 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.191844 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.206185 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-srv-cert\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.213798 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.214000 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.227302 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ae986417-8048-44d4-b110-6bbe3ab2ce7e-srv-cert\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.231802 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.241274 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.252971 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.256207 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.271997 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 09:48:14 
crc kubenswrapper[4814]: I0216 09:48:14.278905 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.279904 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.779882692 +0000 UTC m=+152.473038872 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.292850 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.299966 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-node-bootstrap-token\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.322236 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.325041 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.331953 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.340740 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38346635-3608-48de-967c-aef6ea2b0789-certs\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.344678 4814 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.344743 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key podName:3f2423fe-728b-4236-9d03-04e3472c915e nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.844724036 +0000 UTC m=+152.537880216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key") pod "service-ca-9c57cc56f-p27f4" (UID: "3f2423fe-728b-4236-9d03-04e3472c915e") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.344965 4814 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.344996 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert podName:ead6a0b3-8183-4435-96ea-77026e4d9cf0 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.844987754 +0000 UTC m=+152.538143934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert") pod "service-ca-operator-777779d784-rwdqt" (UID: "ead6a0b3-8183-4435-96ea-77026e4d9cf0") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347703 4814 secret.go:188] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347807 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs podName:bdff7274-020d-47de-a573-391747c777ac nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.847784258 +0000 UTC m=+152.540940628 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs") pod "multus-admission-controller-857f4d67dd-27xhf" (UID: "bdff7274-020d-47de-a573-391747c777ac") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347842 4814 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347884 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls podName:73fb725a-9a40-4283-8e3e-296294a08655 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.8478707 +0000 UTC m=+152.541027090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-drcdg" (UID: "73fb725a-9a40-4283-8e3e-296294a08655") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347950 4814 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.347990 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config podName:ead6a0b3-8183-4435-96ea-77026e4d9cf0 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.847974543 +0000 UTC m=+152.541130923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config") pod "service-ca-operator-777779d784-rwdqt" (UID: "ead6a0b3-8183-4435-96ea-77026e4d9cf0") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.348026 4814 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.348053 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume podName:028cc490-1e41-4efa-b193-42ff552e7a15 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.848045305 +0000 UTC m=+152.541201695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume") pod "dns-default-b9r5t" (UID: "028cc490-1e41-4efa-b193-42ff552e7a15") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.348089 4814 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.348120 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config podName:01bc9817-c469-4d6e-a6cc-cd0463962993 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.848112217 +0000 UTC m=+152.541268597 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config") pod "kube-storage-version-migrator-operator-b67b599dd-gsf8f" (UID: "01bc9817-c469-4d6e-a6cc-cd0463962993") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349187 4814 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349280 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle podName:3f2423fe-728b-4236-9d03-04e3472c915e nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.849265597 +0000 UTC m=+152.542421947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle") pod "service-ca-9c57cc56f-p27f4" (UID: "3f2423fe-728b-4236-9d03-04e3472c915e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349322 4814 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349356 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert podName:01bc9817-c469-4d6e-a6cc-cd0463962993 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.849347789 +0000 UTC m=+152.542504189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-gsf8f" (UID: "01bc9817-c469-4d6e-a6cc-cd0463962993") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349374 4814 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349406 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls podName:028cc490-1e41-4efa-b193-42ff552e7a15 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.84939351 +0000 UTC m=+152.542549910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls") pod "dns-default-b9r5t" (UID: "028cc490-1e41-4efa-b193-42ff552e7a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349427 4814 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.349453 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert podName:d3dffd75-c805-4e30-b870-fdc5fd583c91 nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.849446022 +0000 UTC m=+152.542602402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert") pod "ingress-canary-8nms9" (UID: "d3dffd75-c805-4e30-b870-fdc5fd583c91") : failed to sync secret cache: timed out waiting for the condition Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.351239 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.355182 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fsxcr"] Feb 16 09:48:14 crc kubenswrapper[4814]: W0216 09:48:14.366314 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0bc3cf9_b28c_4673_8b2c_ce1d2616ec98.slice/crio-652e78a1ead2194bb04113f77f56f703ea146afe0bb06d085dcd71a16658da78 WatchSource:0}: Error finding container 652e78a1ead2194bb04113f77f56f703ea146afe0bb06d085dcd71a16658da78: Status 404 returned error can't find the container with id 652e78a1ead2194bb04113f77f56f703ea146afe0bb06d085dcd71a16658da78 Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.372489 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.383997 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.384464 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.884451083 +0000 UTC m=+152.577607263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.392168 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.404874 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.412579 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 09:48:14 crc kubenswrapper[4814]: W0216 09:48:14.425597 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51c8b2c_1728_4385_a7a4_f55a2f7cc18a.slice/crio-9afb9b357836f1b42572f11eb7b19890a40405d9fe58cf763d765db6cea759a0 WatchSource:0}: Error finding container 9afb9b357836f1b42572f11eb7b19890a40405d9fe58cf763d765db6cea759a0: Status 404 returned error can't find the container with id 9afb9b357836f1b42572f11eb7b19890a40405d9fe58cf763d765db6cea759a0 Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.433733 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.452212 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.461230 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.462658 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.473563 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.477651 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.485430 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.487214 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:14.987191543 +0000 UTC m=+152.680347723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.491565 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: W0216 09:48:14.504805 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379f8a26_453f_4cda_878a_8b3b04c3be54.slice/crio-59891efd84a7a0e4c122646de067803a8f81f6577e6ee1a18842d906cad4ffbd WatchSource:0}: Error finding container 59891efd84a7a0e4c122646de067803a8f81f6577e6ee1a18842d906cad4ffbd: Status 404 returned error can't find the container with id 59891efd84a7a0e4c122646de067803a8f81f6577e6ee1a18842d906cad4ffbd Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.512216 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.532179 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.536192 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4p95d"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.552044 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.572147 4814 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.583232 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gfngr"] Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.589657 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.590181 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.09016043 +0000 UTC m=+152.783316610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.593690 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: W0216 09:48:14.595253 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3d36256_4e8e_460d_ad98_eaaafbb76021.slice/crio-8a7bcfefa8e5aa11c2323cd7c456050755c93705702f8b1748899c9a906019ea WatchSource:0}: Error finding container 8a7bcfefa8e5aa11c2323cd7c456050755c93705702f8b1748899c9a906019ea: Status 404 returned error can't find the container with id 8a7bcfefa8e5aa11c2323cd7c456050755c93705702f8b1748899c9a906019ea Feb 16 09:48:14 crc kubenswrapper[4814]: W0216 09:48:14.600416 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4357f219_ec6a_4ada_863f_60ec8dbe0636.slice/crio-ca51cc90418cfbb76fc5b91883f7e1c22adf4fc741591a32843a62c3958444e6 WatchSource:0}: Error finding container ca51cc90418cfbb76fc5b91883f7e1c22adf4fc741591a32843a62c3958444e6: Status 404 returned error can't find the container with id ca51cc90418cfbb76fc5b91883f7e1c22adf4fc741591a32843a62c3958444e6 Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.612334 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.633308 4814 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.651770 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.671729 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.691262 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.691379 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.691739 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.19171585 +0000 UTC m=+152.884872030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.697495 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.698120 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.19809389 +0000 UTC m=+152.891250070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.712053 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.732010 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.756835 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.772569 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.791976 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.799042 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.799911 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-16 09:48:15.299869755 +0000 UTC m=+152.993025935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.812229 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.832342 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.851649 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.874094 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.886788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" event={"ID":"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a","Type":"ContainerStarted","Data":"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.886841 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" event={"ID":"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a","Type":"ContainerStarted","Data":"9afb9b357836f1b42572f11eb7b19890a40405d9fe58cf763d765db6cea759a0"} Feb 16 09:48:14 crc kubenswrapper[4814]: 
I0216 09:48:14.892265 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.892689 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" event={"ID":"e498024a-b042-4d7c-9f47-4140b465bd63","Type":"ContainerStarted","Data":"3eb8ef92c17a91eb3541164a619fa713057d16381ff10bdd124c9e6d8241c13f"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.892780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.892805 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" event={"ID":"e498024a-b042-4d7c-9f47-4140b465bd63","Type":"ContainerStarted","Data":"b1588a51f03b6b5af77176d3ebbba0b4e705fbc551f2e7bf84cc37eeb9d94622"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.892825 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xwqr" event={"ID":"13dde5e3-1577-420f-9b33-4d89a1a8749a","Type":"ContainerStarted","Data":"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.892845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xwqr" event={"ID":"13dde5e3-1577-420f-9b33-4d89a1a8749a","Type":"ContainerStarted","Data":"2323eae6c14f13b0736236a8a88b2dd2c74d6c2a83c13e091f3b93b5aee30099"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.895307 4814 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jt6sp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection 
refused" start-of-body= Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.895361 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.895597 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" event={"ID":"379f8a26-453f-4cda-878a-8b3b04c3be54","Type":"ContainerStarted","Data":"44c13aeeafab95518bb44be2c0e8dbbf2365950c8e97835abf6c7ddc6f40f382"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.895881 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" event={"ID":"379f8a26-453f-4cda-878a-8b3b04c3be54","Type":"ContainerStarted","Data":"59891efd84a7a0e4c122646de067803a8f81f6577e6ee1a18842d906cad4ffbd"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.898833 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" event={"ID":"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d","Type":"ContainerStarted","Data":"a85644c0d4f9e87c0791ebf1b1a8af13e75c2022e7f814ac2ce307cbf7f3bdd7"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.898869 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" event={"ID":"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d","Type":"ContainerStarted","Data":"283265139e7ae17d395234f9b20c6671142e1ae9311b61f0b6e99d4785523236"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.898883 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" event={"ID":"3f9c31c6-0f5f-4d58-8138-a5b4fa2b5f2d","Type":"ContainerStarted","Data":"c1d3793be15653da3099b0c5a5de787193b29f6ecb81e33fedb848b9e69db49d"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.900827 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.900929 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.900962 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.900996 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.901042 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.901095 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.901123 4814 generic.go:334] "Generic (PLEG): container finished" podID="c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98" containerID="d8456138cff8bfc11c3540ce428995f099f083cfabb12b8eda2ad57a98c6a447" exitCode=0 Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.901198 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" event={"ID":"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98","Type":"ContainerDied","Data":"d8456138cff8bfc11c3540ce428995f099f083cfabb12b8eda2ad57a98c6a447"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.901222 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" event={"ID":"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98","Type":"ContainerStarted","Data":"652e78a1ead2194bb04113f77f56f703ea146afe0bb06d085dcd71a16658da78"} Feb 16 09:48:14 crc kubenswrapper[4814]: E0216 09:48:14.901620 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 09:48:15.401595469 +0000 UTC m=+153.094751649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902105 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902190 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902217 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902321 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902501 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.902698 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.903224 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/028cc490-1e41-4efa-b193-42ff552e7a15-config-volume\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.903911 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead6a0b3-8183-4435-96ea-77026e4d9cf0-config\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.904043 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/01bc9817-c469-4d6e-a6cc-cd0463962993-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.904245 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" event={"ID":"b3d36256-4e8e-460d-ad98-eaaafbb76021","Type":"ContainerStarted","Data":"6368bdb5e471819edcd0d1f078aa310d253adec9613d176eb971a8fe8454f4cc"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.904311 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" event={"ID":"b3d36256-4e8e-460d-ad98-eaaafbb76021","Type":"ContainerStarted","Data":"cf4dc700f74941641f23d023d686015d73adba0aff5d7b84020b879b8e1af04e"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.904321 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" event={"ID":"b3d36256-4e8e-460d-ad98-eaaafbb76021","Type":"ContainerStarted","Data":"8a7bcfefa8e5aa11c2323cd7c456050755c93705702f8b1748899c9a906019ea"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.906332 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3f2423fe-728b-4236-9d03-04e3472c915e-signing-cabundle\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.910406 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bdff7274-020d-47de-a573-391747c777ac-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.910438 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3dffd75-c805-4e30-b870-fdc5fd583c91-cert\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.910455 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01bc9817-c469-4d6e-a6cc-cd0463962993-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.910644 4814 request.go:700] Waited for 1.875809012s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0 Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.910943 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/028cc490-1e41-4efa-b193-42ff552e7a15-metrics-tls\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.911017 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/73fb725a-9a40-4283-8e3e-296294a08655-control-plane-machine-set-operator-tls\") 
pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.912080 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ead6a0b3-8183-4435-96ea-77026e4d9cf0-serving-cert\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.913280 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" event={"ID":"891ce392-5d04-4f40-bc6e-f0660568526e","Type":"ContainerStarted","Data":"e62c051bffa5922bac3983c323f3d9e5d121208f71e4a04d7dbb819724ddf7ac"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.913332 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" event={"ID":"891ce392-5d04-4f40-bc6e-f0660568526e","Type":"ContainerStarted","Data":"70b14927b0f2bf74eff2c434cf5e75c144cb9840a1d9065078fda5f0f0268b76"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.914124 4814 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.921192 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gfngr" event={"ID":"4357f219-ec6a-4ada-863f-60ec8dbe0636","Type":"ContainerStarted","Data":"d82fe6270d512c25e64c86fa44222abfa82d1fe245b9aa96d2c572623d3ffe44"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.921226 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gfngr" 
event={"ID":"4357f219-ec6a-4ada-863f-60ec8dbe0636","Type":"ContainerStarted","Data":"ca51cc90418cfbb76fc5b91883f7e1c22adf4fc741591a32843a62c3958444e6"} Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.921517 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.924213 4814 patch_prober.go:28] interesting pod/console-operator-58897d9998-gfngr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.924286 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gfngr" podUID="4357f219-ec6a-4ada-863f-60ec8dbe0636" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.925267 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3f2423fe-728b-4236-9d03-04e3472c915e-signing-key\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.971453 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9dk\" (UniqueName: \"kubernetes.io/projected/a97a7dc8-4167-4e3d-b315-473d5adcfe1b-kube-api-access-gx9dk\") pod \"openshift-config-operator-7777fb866f-6mqrk\" (UID: \"a97a7dc8-4167-4e3d-b315-473d5adcfe1b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:14 crc kubenswrapper[4814]: I0216 09:48:14.993278 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj8c9\" (UniqueName: \"kubernetes.io/projected/5d9feb14-2511-4e1e-a78a-e737ae28770c-kube-api-access-wj8c9\") pod \"downloads-7954f5f757-j5fnw\" (UID: \"5d9feb14-2511-4e1e-a78a-e737ae28770c\") " pod="openshift-console/downloads-7954f5f757-j5fnw" Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.005397 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.010323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.011267 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.511237143 +0000 UTC m=+153.204393483 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.027952 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6bp\" (UniqueName: \"kubernetes.io/projected/09532720-4c09-46f9-9dc7-c3d201c74171-kube-api-access-cv6bp\") pod \"cluster-samples-operator-665b6dd947-8x9jr\" (UID: \"09532720-4c09-46f9-9dc7-c3d201c74171\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.049527 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8ht\" (UniqueName: \"kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht\") pod \"route-controller-manager-6576b87f9c-d67m2\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.053148 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.058130 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-j5fnw"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.066695 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfkg\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.074146 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.089975 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrnk\" (UniqueName: \"kubernetes.io/projected/73fb725a-9a40-4283-8e3e-296294a08655-kube-api-access-mlrnk\") pod \"control-plane-machine-set-operator-78cbb6b69f-drcdg\" (UID: \"73fb725a-9a40-4283-8e3e-296294a08655\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.111336 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqv9d\" (UniqueName: \"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-kube-api-access-nqv9d\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.124863 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.125680 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.625650605 +0000 UTC m=+153.318806785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.138457 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcg8\" (UniqueName: \"kubernetes.io/projected/da041f6a-ea31-4e70-b695-aa0fd7e0ce85-kube-api-access-mmcg8\") pod \"csi-hostpathplugin-7phc6\" (UID: \"da041f6a-ea31-4e70-b695-aa0fd7e0ce85\") " pod="hostpath-provisioner/csi-hostpathplugin-7phc6"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.141073 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.152187 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9db\" (UniqueName: \"kubernetes.io/projected/9747dc3f-ea55-4af7-8561-eded508bd884-kube-api-access-2j9db\") pod \"machine-config-operator-74547568cd-hbcgg\" (UID: \"9747dc3f-ea55-4af7-8561-eded508bd884\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.168890 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.173125 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.193298 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc4bb\" (UniqueName: \"kubernetes.io/projected/c5b89f0c-c038-4eec-8942-bf236eb9ead0-kube-api-access-fc4bb\") pod \"dns-operator-744455d44c-d6q92\" (UID: \"c5b89f0c-c038-4eec-8942-bf236eb9ead0\") " pod="openshift-dns-operator/dns-operator-744455d44c-d6q92"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.226671 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.227414 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.727387629 +0000 UTC m=+153.420543809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.234860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8045f9a7-b013-41be-9aef-270522765538-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ltl59\" (UID: \"8045f9a7-b013-41be-9aef-270522765538\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.239229 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsgdv\" (UniqueName: \"kubernetes.io/projected/38346635-3608-48de-967c-aef6ea2b0789-kube-api-access-bsgdv\") pod \"machine-config-server-kmk4b\" (UID: \"38346635-3608-48de-967c-aef6ea2b0789\") " pod="openshift-machine-config-operator/machine-config-server-kmk4b"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.251780 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv8k9\" (UniqueName: \"kubernetes.io/projected/6c22d3e2-5990-4295-804d-318a7321bc22-kube-api-access-wv8k9\") pod \"authentication-operator-69f744f599-swmkw\" (UID: \"6c22d3e2-5990-4295-804d-318a7321bc22\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.273957 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfgrv\" (UniqueName: \"kubernetes.io/projected/c6caef89-a08c-46ec-b2c8-af0f2b795b02-kube-api-access-lfgrv\") pod \"router-default-5444994796-9kljz\" (UID: \"c6caef89-a08c-46ec-b2c8-af0f2b795b02\") " pod="openshift-ingress/router-default-5444994796-9kljz"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.286614 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmk4b"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.292490 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgz6l\" (UniqueName: \"kubernetes.io/projected/3f2423fe-728b-4236-9d03-04e3472c915e-kube-api-access-xgz6l\") pod \"service-ca-9c57cc56f-p27f4\" (UID: \"3f2423fe-728b-4236-9d03-04e3472c915e\") " pod="openshift-service-ca/service-ca-9c57cc56f-p27f4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.313011 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fvk\" (UniqueName: \"kubernetes.io/projected/20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e-kube-api-access-v9fvk\") pod \"migrator-59844c95c7-mqgk9\" (UID: \"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.325387 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ff0705e-83d4-4955-9a05-03dfec15075b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-28lnk\" (UID: \"3ff0705e-83d4-4955-9a05-03dfec15075b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.329292 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.330057 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.830038408 +0000 UTC m=+153.523194588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.330406 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"
Feb 16 09:48:15 crc kubenswrapper[4814]: W0216 09:48:15.332090 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38346635_3608_48de_967c_aef6ea2b0789.slice/crio-7c83e1c3076836c335a7eeccbc6136501853a6f7b4a0cbbd3d0a07027c30242e WatchSource:0}: Error finding container 7c83e1c3076836c335a7eeccbc6136501853a6f7b4a0cbbd3d0a07027c30242e: Status 404 returned error can't find the container with id 7c83e1c3076836c335a7eeccbc6136501853a6f7b4a0cbbd3d0a07027c30242e
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.345457 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p27f4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.349335 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvx2\" (UniqueName: \"kubernetes.io/projected/26a661b7-74ed-490d-8003-bc3c6e7d8c4e-kube-api-access-wgvx2\") pod \"package-server-manager-789f6589d5-th4pf\" (UID: \"26a661b7-74ed-490d-8003-bc3c6e7d8c4e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.374245 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxvz\" (UniqueName: \"kubernetes.io/projected/01bc9817-c469-4d6e-a6cc-cd0463962993-kube-api-access-fjxvz\") pod \"kube-storage-version-migrator-operator-b67b599dd-gsf8f\" (UID: \"01bc9817-c469-4d6e-a6cc-cd0463962993\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.401193 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-j5fnw"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.401368 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnb5\" (UniqueName: \"kubernetes.io/projected/d3dffd75-c805-4e30-b870-fdc5fd583c91-kube-api-access-2lnb5\") pod \"ingress-canary-8nms9\" (UID: \"d3dffd75-c805-4e30-b870-fdc5fd583c91\") " pod="openshift-ingress-canary/ingress-canary-8nms9"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.402658 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8nms9"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.413494 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f887\" (UniqueName: \"kubernetes.io/projected/30b7584a-b56a-4a3c-9f53-3d5105ae1c93-kube-api-access-4f887\") pod \"etcd-operator-b45778765-5kfsf\" (UID: \"30b7584a-b56a-4a3c-9f53-3d5105ae1c93\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.417185 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.420873 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7phc6"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.425148 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.433182 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.433230 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4qhs\" (UniqueName: \"kubernetes.io/projected/bdff7274-020d-47de-a573-391747c777ac-kube-api-access-v4qhs\") pod \"multus-admission-controller-857f4d67dd-27xhf\" (UID: \"bdff7274-020d-47de-a573-391747c777ac\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.433609 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:15.93359261 +0000 UTC m=+153.626748790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.433884 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.441602 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.450052 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.453511 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9hm\" (UniqueName: \"kubernetes.io/projected/e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2-kube-api-access-4s9hm\") pod \"olm-operator-6b444d44fb-4hzfq\" (UID: \"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.459482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.471997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5k4\" (UniqueName: \"kubernetes.io/projected/91c725e3-26cb-474c-a672-d76cdda6a5de-kube-api-access-mt5k4\") pod \"ingress-operator-5b745b69d9-nxnq2\" (UID: \"91c725e3-26cb-474c-a672-d76cdda6a5de\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.483031 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.484680 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.487683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549tv\" (UniqueName: \"kubernetes.io/projected/52fbaf98-06b3-4c96-8155-a94db62cdc56-kube-api-access-549tv\") pod \"machine-config-controller-84d6567774-bbbtz\" (UID: \"52fbaf98-06b3-4c96-8155-a94db62cdc56\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.490056 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-9kljz"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.500923 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.508966 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrfp\" (UniqueName: \"kubernetes.io/projected/90271beb-156c-4e46-9965-b2d169d7cb67-kube-api-access-dlrfp\") pod \"packageserver-d55dfcdfc-xdw74\" (UID: \"90271beb-156c-4e46-9965-b2d169d7cb67\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.509041 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.526291 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.529063 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9sc9\" (UniqueName: \"kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9\") pod \"collect-profiles-29520585-fl75z\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.534793 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.535139 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.035124219 +0000 UTC m=+153.728280399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.539268 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.556350 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d131f500-cc08-4500-802b-9c7ccb8f5457-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gjjfn\" (UID: \"d131f500-cc08-4500-802b-9c7ccb8f5457\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.560621 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.570219 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.575478 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gnsr\" (UniqueName: \"kubernetes.io/projected/ae986417-8048-44d4-b110-6bbe3ab2ce7e-kube-api-access-5gnsr\") pod \"catalog-operator-68c6474976-4hzv4\" (UID: \"ae986417-8048-44d4-b110-6bbe3ab2ce7e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.578668 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.581590 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.593008 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86284abf-f706-432c-871d-5742dca5966b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-m9fp4\" (UID: \"86284abf-f706-432c-871d-5742dca5966b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.608917 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"
Feb 16 09:48:15 crc kubenswrapper[4814]: W0216 09:48:15.614402 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c0e0223_e440_4b15_8183_41940ec62701.slice/crio-b58411e5c3f31ea198bca54e1cd9ff97fee1d410dca6dc2821cdd46a9d3ee53f WatchSource:0}: Error finding container b58411e5c3f31ea198bca54e1cd9ff97fee1d410dca6dc2821cdd46a9d3ee53f: Status 404 returned error can't find the container with id b58411e5c3f31ea198bca54e1cd9ff97fee1d410dca6dc2821cdd46a9d3ee53f
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.635792 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.635820 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqfwx\" (UniqueName: \"kubernetes.io/projected/ead6a0b3-8183-4435-96ea-77026e4d9cf0-kube-api-access-dqfwx\") pod \"service-ca-operator-777779d784-rwdqt\" (UID: \"ead6a0b3-8183-4435-96ea-77026e4d9cf0\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.635976 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.135963749 +0000 UTC m=+153.829119929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.636009 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.636294 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.136288329 +0000 UTC m=+153.829444509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: W0216 09:48:15.640885 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9747dc3f_ea55_4af7_8561_eded508bd884.slice/crio-cc9975f12ca31455622c0530d94bf35d2449d37ea2c15731fab29618708100b8 WatchSource:0}: Error finding container cc9975f12ca31455622c0530d94bf35d2449d37ea2c15731fab29618708100b8: Status 404 returned error can't find the container with id cc9975f12ca31455622c0530d94bf35d2449d37ea2c15731fab29618708100b8
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.643969 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thbrc\" (UniqueName: \"kubernetes.io/projected/0956f442-216c-4be4-9c81-efcb02614c3f-kube-api-access-thbrc\") pod \"openshift-controller-manager-operator-756b6f6bc6-6z5lx\" (UID: \"0956f442-216c-4be4-9c81-efcb02614c3f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.657014 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z29b2\" (UniqueName: \"kubernetes.io/projected/028cc490-1e41-4efa-b193-42ff552e7a15-kube-api-access-z29b2\") pod \"dns-default-b9r5t\" (UID: \"028cc490-1e41-4efa-b193-42ff552e7a15\") " pod="openshift-dns/dns-default-b9r5t"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.661195 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.669994 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlnh6\" (UniqueName: \"kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6\") pod \"marketplace-operator-79b997595-cr82j\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " pod="openshift-marketplace/marketplace-operator-79b997595-cr82j"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.682360 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.684680 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.691836 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-b9r5t"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.737499 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.737827 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.237810587 +0000 UTC m=+153.930966767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.737928 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p27f4"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.746754 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.759548 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.760596 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.804156 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8nms9"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.807993 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.838831 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.839106 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.339093629 +0000 UTC m=+154.032249809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.856259 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.876264 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.939864 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:15 crc kubenswrapper[4814]: E0216 09:48:15.940323 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.44029986 +0000 UTC m=+154.133456030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.941688 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5kfsf"]
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.949048 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kljz" event={"ID":"c6caef89-a08c-46ec-b2c8-af0f2b795b02","Type":"ContainerStarted","Data":"d2416676bab000b20a8488c092c76e9d0af1c0c53e692ba20c5c47ec4097eb98"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.963689 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" event={"ID":"9747dc3f-ea55-4af7-8561-eded508bd884","Type":"ContainerStarted","Data":"cc9975f12ca31455622c0530d94bf35d2449d37ea2c15731fab29618708100b8"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.979123 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" event={"ID":"09532720-4c09-46f9-9dc7-c3d201c74171","Type":"ContainerStarted","Data":"d4c10d766e9bf14784544df21202826ea93deebb14584f97e37a7231516292c5"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.980454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" event={"ID":"73fb725a-9a40-4283-8e3e-296294a08655","Type":"ContainerStarted","Data":"4d8e5b8bc63f13bd090286bb3802af814701e650f99481ee424422a9d7ea5d69"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.988404 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" event={"ID":"3f2423fe-728b-4236-9d03-04e3472c915e","Type":"ContainerStarted","Data":"0bd923856ec6de4cdfaf6d64466277ebe072015c983dc19d7442c094bf8f9d25"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.989940 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" event={"ID":"a97a7dc8-4167-4e3d-b315-473d5adcfe1b","Type":"ContainerStarted","Data":"b747be3f04f18c68b5148762554a43150b9403b9689c2646641032097647d7fa"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.992414 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" event={"ID":"9c0e0223-e440-4b15-8183-41940ec62701","Type":"ContainerStarted","Data":"b58411e5c3f31ea198bca54e1cd9ff97fee1d410dca6dc2821cdd46a9d3ee53f"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.999164 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" event={"ID":"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98","Type":"ContainerStarted","Data":"836db6002641bc11f483b79d4e83fac16ef3c45b49a92082c5820655572e53f2"}
Feb 16 09:48:15 crc kubenswrapper[4814]: I0216 09:48:15.999219 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" event={"ID":"c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98","Type":"ContainerStarted","Data":"26404f71c040ea37873d3b2ca735a3686b129b0b929a29a739ce639e10c8cbbc"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.001437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmk4b" event={"ID":"38346635-3608-48de-967c-aef6ea2b0789","Type":"ContainerStarted","Data":"a631723874ef3aebdaae4121bc4cd382db559571f3338c7994288938fe36886a"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.001487 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmk4b" event={"ID":"38346635-3608-48de-967c-aef6ea2b0789","Type":"ContainerStarted","Data":"7c83e1c3076836c335a7eeccbc6136501853a6f7b4a0cbbd3d0a07027c30242e"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.002933 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j5fnw" event={"ID":"5d9feb14-2511-4e1e-a78a-e737ae28770c","Type":"ContainerStarted","Data":"547c3dc3c60e2653ef5157d057e5e40ca4b7a2d8735ff8546555250216611005"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.005872 4814 generic.go:334] "Generic (PLEG): container finished" podID="891ce392-5d04-4f40-bc6e-f0660568526e" containerID="e62c051bffa5922bac3983c323f3d9e5d121208f71e4a04d7dbb819724ddf7ac" exitCode=0
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.006099 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" event={"ID":"891ce392-5d04-4f40-bc6e-f0660568526e","Type":"ContainerDied","Data":"e62c051bffa5922bac3983c323f3d9e5d121208f71e4a04d7dbb819724ddf7ac"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.006133 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" event={"ID":"891ce392-5d04-4f40-bc6e-f0660568526e","Type":"ContainerStarted","Data":"fe3af19250eeb6291a95cb5c701d6fba314c811bafc4d1be8bea95945a7cd5ed"}
Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.006893 4814 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jt6sp container/controller-manager namespace/openshift-controller-manager:
Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.006935 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.007693 4814 patch_prober.go:28] interesting pod/console-operator-58897d9998-gfngr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.007765 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gfngr" podUID="4357f219-ec6a-4ada-863f-60ec8dbe0636" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.008353 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.041239 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 
09:48:16.041652 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.541635643 +0000 UTC m=+154.234791823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.143372 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.146163 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.646141222 +0000 UTC m=+154.339297402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.245758 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.247123 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.747092135 +0000 UTC m=+154.440248315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.346658 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.347207 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.847192585 +0000 UTC m=+154.540348765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.372922 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" podStartSLOduration=132.372904618 podStartE2EDuration="2m12.372904618s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:16.372336284 +0000 UTC m=+154.065492464" watchObservedRunningTime="2026-02-16 09:48:16.372904618 +0000 UTC m=+154.066060798" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.448564 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4wrlr" podStartSLOduration=132.448526929 podStartE2EDuration="2m12.448526929s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:16.447639055 +0000 UTC m=+154.140795245" watchObservedRunningTime="2026-02-16 09:48:16.448526929 +0000 UTC m=+154.141683109" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.453881 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: 
\"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.454223 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:16.95420905 +0000 UTC m=+154.647365230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.502058 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz"] Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.556071 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.556551 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.05651254 +0000 UTC m=+154.749668720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.561774 4814 csr.go:261] certificate signing request csr-7jxd4 is approved, waiting to be issued Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.571294 4814 csr.go:257] certificate signing request csr-7jxd4 is issued Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.651679 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" podStartSLOduration=132.651656449 podStartE2EDuration="2m12.651656449s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:16.649282955 +0000 UTC m=+154.342439145" watchObservedRunningTime="2026-02-16 09:48:16.651656449 +0000 UTC m=+154.344812629" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.666059 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59"] Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.669354 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:16 crc 
kubenswrapper[4814]: E0216 09:48:16.669849 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.169830891 +0000 UTC m=+154.862987071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.751442 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" podStartSLOduration=131.751422591 podStartE2EDuration="2m11.751422591s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:16.751343969 +0000 UTC m=+154.444500149" watchObservedRunningTime="2026-02-16 09:48:16.751422591 +0000 UTC m=+154.444578771" Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.771265 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.771678 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.271657719 +0000 UTC m=+154.964813919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.872570 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:16 crc kubenswrapper[4814]: E0216 09:48:16.873343 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.373327201 +0000 UTC m=+155.066483381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.928432 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" podStartSLOduration=132.928413246 podStartE2EDuration="2m12.928413246s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:16.891261217 +0000 UTC m=+154.584417417" watchObservedRunningTime="2026-02-16 09:48:16.928413246 +0000 UTC m=+154.621569436" Feb 16 09:48:16 crc kubenswrapper[4814]: W0216 09:48:16.960813 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52fbaf98_06b3_4c96_8155_a94db62cdc56.slice/crio-c21a8a1c0352aa8041e4ffaf6ecc40ee47215f879a5ce046ea75221c3dff525f WatchSource:0}: Error finding container c21a8a1c0352aa8041e4ffaf6ecc40ee47215f879a5ce046ea75221c3dff525f: Status 404 returned error can't find the container with id c21a8a1c0352aa8041e4ffaf6ecc40ee47215f879a5ce046ea75221c3dff525f Feb 16 09:48:16 crc kubenswrapper[4814]: I0216 09:48:16.975565 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:16 
crc kubenswrapper[4814]: E0216 09:48:16.976162 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.476142814 +0000 UTC m=+155.169298994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.007768 4814 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vv6v6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.007831 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.060945 4814 generic.go:334] "Generic (PLEG): container finished" podID="a97a7dc8-4167-4e3d-b315-473d5adcfe1b" containerID="2819445687aafe06ddd6b2198a6a13cce5bd0e0a821c6c9802043cc2e5ec31da" exitCode=0 Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.083289 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.084492 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.584473643 +0000 UTC m=+155.277629823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.104843 4814 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-d67m2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.104896 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 
16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.117285 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.117335 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123671 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-swmkw"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123720 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123734 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-j5fnw" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123752 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7phc6"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123767 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" event={"ID":"9747dc3f-ea55-4af7-8561-eded508bd884","Type":"ContainerStarted","Data":"d92c12e72cbdf7baac4c598f5c8df1bf36be0986f47214ae1be48d5a6fba5155"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123787 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8nms9" 
event={"ID":"d3dffd75-c805-4e30-b870-fdc5fd583c91","Type":"ContainerStarted","Data":"30d22d2b563a024c4adb9cb2923086c40ddbe967136990d2ab08eb295c03b775"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123801 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" event={"ID":"a97a7dc8-4167-4e3d-b315-473d5adcfe1b","Type":"ContainerDied","Data":"2819445687aafe06ddd6b2198a6a13cce5bd0e0a821c6c9802043cc2e5ec31da"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123817 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" event={"ID":"30b7584a-b56a-4a3c-9f53-3d5105ae1c93","Type":"ContainerStarted","Data":"94f2e417ecd6562a77d388ee16ac9fffb6bf37d1606582f166d16a46364ed6ac"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" event={"ID":"09532720-4c09-46f9-9dc7-c3d201c74171","Type":"ContainerStarted","Data":"66ca8fdcfc45319fe0501d476c2869aee1a775f116589ec16e53e1e20913b1a7"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123911 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" event={"ID":"9c0e0223-e440-4b15-8183-41940ec62701","Type":"ContainerStarted","Data":"c770a6f1488ec33d836595d1791d3ccc84d5444cd97ab434cca791a6d598a63b"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123923 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-j5fnw" event={"ID":"5d9feb14-2511-4e1e-a78a-e737ae28770c","Type":"ContainerStarted","Data":"cbb91a94158426d2b9363bad752ba9ddc3ec62c25d19647e62c5e885298d2b4b"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123938 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" event={"ID":"52fbaf98-06b3-4c96-8155-a94db62cdc56","Type":"ContainerStarted","Data":"c21a8a1c0352aa8041e4ffaf6ecc40ee47215f879a5ce046ea75221c3dff525f"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.123948 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" event={"ID":"3ff0705e-83d4-4955-9a05-03dfec15075b","Type":"ContainerStarted","Data":"94bd4b2fab21640eda0eb3197b00c5f0d7f02cea9d9737ecf61cc228df461081"} Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.131503 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.185713 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.187273 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.687254366 +0000 UTC m=+155.380410546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.201900 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4p95d" podStartSLOduration=133.201882144 podStartE2EDuration="2m13.201882144s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.199962184 +0000 UTC m=+154.893118364" watchObservedRunningTime="2026-02-16 09:48:17.201882144 +0000 UTC m=+154.895038324" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.289018 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.289311 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.789299769 +0000 UTC m=+155.482455939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.336388 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4xwqr" podStartSLOduration=133.336366669 podStartE2EDuration="2m13.336366669s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.333136143 +0000 UTC m=+155.026292333" watchObservedRunningTime="2026-02-16 09:48:17.336366669 +0000 UTC m=+155.029522849" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.392198 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.392856 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.892834641 +0000 UTC m=+155.585990821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.493892 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.494752 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:17.994733109 +0000 UTC m=+155.687889289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.571825 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gfngr" podStartSLOduration=133.571808508 podStartE2EDuration="2m13.571808508s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.5417691 +0000 UTC m=+155.234925280" watchObservedRunningTime="2026-02-16 09:48:17.571808508 +0000 UTC m=+155.264964688" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.573211 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 09:43:16 +0000 UTC, rotation deadline is 2026-10-30 06:15:53.035428719 +0000 UTC Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.573286 4814 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6140h27m35.462145552s for next certificate rotation Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.598162 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.598668 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.098648631 +0000 UTC m=+155.791804811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.608432 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gss2f" podStartSLOduration=133.60840447 podStartE2EDuration="2m13.60840447s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.57188296 +0000 UTC m=+155.265039160" watchObservedRunningTime="2026-02-16 09:48:17.60840447 +0000 UTC m=+155.301560650" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.690547 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kmk4b" podStartSLOduration=5.690516264 podStartE2EDuration="5.690516264s" podCreationTimestamp="2026-02-16 09:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.689747043 +0000 UTC m=+155.382903223" watchObservedRunningTime="2026-02-16 09:48:17.690516264 +0000 UTC m=+155.383672444" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.699595 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.699950 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.199938374 +0000 UTC m=+155.893094554 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.742027 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.742266 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" podStartSLOduration=132.742240468 podStartE2EDuration="2m12.742240468s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.728294937 +0000 UTC m=+155.421451117" watchObservedRunningTime="2026-02-16 09:48:17.742240468 +0000 UTC 
m=+155.435396648" Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.766051 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.800377 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.802645 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.302621794 +0000 UTC m=+155.995777974 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.886631 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.892448 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.895928 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d6q92"] Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.906681 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:17 crc kubenswrapper[4814]: E0216 09:48:17.907153 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.407135291 +0000 UTC m=+156.100291471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:17 crc kubenswrapper[4814]: W0216 09:48:17.931547 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5b89f0c_c038_4eec_8942_bf236eb9ead0.slice/crio-48993d723b3ab723e543e298aa62855caf4c285b58471a9976f454870ee727c3 WatchSource:0}: Error finding container 48993d723b3ab723e543e298aa62855caf4c285b58471a9976f454870ee727c3: Status 404 returned error can't find the container with id 48993d723b3ab723e543e298aa62855caf4c285b58471a9976f454870ee727c3 Feb 16 09:48:17 crc kubenswrapper[4814]: I0216 09:48:17.960140 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-j5fnw" podStartSLOduration=133.96012189 podStartE2EDuration="2m13.96012189s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:17.959918745 +0000 UTC m=+155.653074925" watchObservedRunningTime="2026-02-16 09:48:17.96012189 +0000 UTC m=+155.653278070" Feb 16 09:48:17 crc kubenswrapper[4814]: W0216 09:48:17.989586 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91c725e3_26cb_474c_a672_d76cdda6a5de.slice/crio-c9d9411cb3d8488ad73722bf61b59689a808bcdfd7dcfe299753306c119e10ae WatchSource:0}: Error finding container c9d9411cb3d8488ad73722bf61b59689a808bcdfd7dcfe299753306c119e10ae: Status 404 
returned error can't find the container with id c9d9411cb3d8488ad73722bf61b59689a808bcdfd7dcfe299753306c119e10ae Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.003482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.013197 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.013613 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.513596751 +0000 UTC m=+156.206752931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.017445 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.026784 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-b9r5t"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.045851 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-27xhf"] Feb 16 09:48:18 crc kubenswrapper[4814]: W0216 09:48:18.053658 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod028cc490_1e41_4efa_b193_42ff552e7a15.slice/crio-9ac1ac309a0a0a8d60c59b76b6a84fb6d5eff65b1442907cddf5e94e06528be8 WatchSource:0}: Error finding container 9ac1ac309a0a0a8d60c59b76b6a84fb6d5eff65b1442907cddf5e94e06528be8: Status 404 returned error can't find the container with id 9ac1ac309a0a0a8d60c59b76b6a84fb6d5eff65b1442907cddf5e94e06528be8 Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.056472 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4"] Feb 16 09:48:18 crc kubenswrapper[4814]: W0216 09:48:18.082595 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01bc9817_c469_4d6e_a6cc_cd0463962993.slice/crio-f401e2b17b363b31f20b96070e8d2f6fb7f9d3eb415cd0f1782d8c196bcf181c WatchSource:0}: Error finding container f401e2b17b363b31f20b96070e8d2f6fb7f9d3eb415cd0f1782d8c196bcf181c: Status 404 returned error can't find the container with id f401e2b17b363b31f20b96070e8d2f6fb7f9d3eb415cd0f1782d8c196bcf181c Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.088674 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.093727 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.105996 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.106044 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.115443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.115917 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"] Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.116210 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:18 crc kubenswrapper[4814]: W0216 09:48:18.116438 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86284abf_f706_432c_871d_5742dca5966b.slice/crio-07596579105b789e8c4f39462c8b9bdbb3fc3377e3cf9964d09ef10cbcd0a6d0 WatchSource:0}: Error finding container 07596579105b789e8c4f39462c8b9bdbb3fc3377e3cf9964d09ef10cbcd0a6d0: Status 404 returned error can't find the container with id 07596579105b789e8c4f39462c8b9bdbb3fc3377e3cf9964d09ef10cbcd0a6d0 Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.116660 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.616642761 +0000 UTC m=+156.309798941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.181361 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" event={"ID":"6c22d3e2-5990-4295-804d-318a7321bc22","Type":"ContainerStarted","Data":"ff2cf547cbfb9bb74589d530a7aa53a1339651d59461ce32404294bb37c67086"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.181420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" 
event={"ID":"6c22d3e2-5990-4295-804d-318a7321bc22","Type":"ContainerStarted","Data":"dda0dad6bfc1ae57392a17cb16398ab66b034c815a56407cf71419d95d63f8ca"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.201729 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" event={"ID":"bdff7274-020d-47de-a573-391747c777ac","Type":"ContainerStarted","Data":"af2b7730dc24c2c3ac3ab86b738f8457616de5808c92e8ce22575be559ef5a5e"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.220443 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.222453 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.722431363 +0000 UTC m=+156.415587543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.234316 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" event={"ID":"c5b89f0c-c038-4eec-8942-bf236eb9ead0","Type":"ContainerStarted","Data":"48993d723b3ab723e543e298aa62855caf4c285b58471a9976f454870ee727c3"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.241583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" event={"ID":"30b7584a-b56a-4a3c-9f53-3d5105ae1c93","Type":"ContainerStarted","Data":"b8857f8ceb473357c36c819d287dce95add3013444dd68aefad75075107a2fcf"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.274759 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" event={"ID":"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e","Type":"ContainerStarted","Data":"fe41e7a270fef6a66b7db2d4fb32d61ebb3cf7788354b6f3aeb27ebadc0bf6ef"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.274861 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" event={"ID":"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e","Type":"ContainerStarted","Data":"224b41b954e38604db3374c82556efa99655624020049e184669281e8e7c52de"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.288247 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" event={"ID":"8045f9a7-b013-41be-9aef-270522765538","Type":"ContainerStarted","Data":"6170efd01ce1ef43465ed50d2a1be8ed4ee161d3b6a28f060737c088ca19699a"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.289072 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" event={"ID":"8045f9a7-b013-41be-9aef-270522765538","Type":"ContainerStarted","Data":"bbea29d6ad72e381844718ddbfa3cb96bfa92b9938f2dee9e2b80e3eeb9456b6"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.299108 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-swmkw" podStartSLOduration=134.29907657 podStartE2EDuration="2m14.29907657s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.223719537 +0000 UTC m=+155.916875727" watchObservedRunningTime="2026-02-16 09:48:18.29907657 +0000 UTC m=+155.992232750" Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.304332 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-9kljz" event={"ID":"c6caef89-a08c-46ec-b2c8-af0f2b795b02","Type":"ContainerStarted","Data":"262436696b1c061f4014e8bf4a27c38caffdfc4cc164c9e262adf7882429f075"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.320959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b9r5t" event={"ID":"028cc490-1e41-4efa-b193-42ff552e7a15","Type":"ContainerStarted","Data":"9ac1ac309a0a0a8d60c59b76b6a84fb6d5eff65b1442907cddf5e94e06528be8"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.321822 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.324151 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.824131306 +0000 UTC m=+156.517287486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.336158 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5kfsf" podStartSLOduration=134.336141755 podStartE2EDuration="2m14.336141755s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.298236828 +0000 UTC m=+155.991393008" watchObservedRunningTime="2026-02-16 09:48:18.336141755 +0000 UTC m=+156.029297935" Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.341752 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" 
event={"ID":"01bc9817-c469-4d6e-a6cc-cd0463962993","Type":"ContainerStarted","Data":"f401e2b17b363b31f20b96070e8d2f6fb7f9d3eb415cd0f1782d8c196bcf181c"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.359335 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" event={"ID":"da041f6a-ea31-4e70-b695-aa0fd7e0ce85","Type":"ContainerStarted","Data":"142258249d031adbb07ec49cc79aa8b1f3128caa19283be26d87d57884623cc2"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.361603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" event={"ID":"09532720-4c09-46f9-9dc7-c3d201c74171","Type":"ContainerStarted","Data":"647c0089c2808937bbfd579f78c78726a91d1cadfd768bc0a15693154f51639d"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.371323 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ltl59" podStartSLOduration=134.37129547 podStartE2EDuration="2m14.37129547s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.338026195 +0000 UTC m=+156.031182375" watchObservedRunningTime="2026-02-16 09:48:18.37129547 +0000 UTC m=+156.064451650" Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.376152 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" event={"ID":"ae79d44f-eef6-42b4-bd2b-50b9faece115","Type":"ContainerStarted","Data":"cf99f0ef0213c9accd23aadfd6d625bf13d1c0bd6683ee7de489d6c0b10ceba3"} Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.386299 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8nms9" 
event={"ID":"d3dffd75-c805-4e30-b870-fdc5fd583c91","Type":"ContainerStarted","Data":"27ec34c4669fa0602dedeeb2d43536ca5839c547f22b653b64eaf8034d14d187"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.391992 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8x9jr" podStartSLOduration=134.391970929 podStartE2EDuration="2m14.391970929s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.391716912 +0000 UTC m=+156.084873102" watchObservedRunningTime="2026-02-16 09:48:18.391970929 +0000 UTC m=+156.085127109"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.393329 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-9kljz" podStartSLOduration=134.393321176 podStartE2EDuration="2m14.393321176s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.370119578 +0000 UTC m=+156.063275758" watchObservedRunningTime="2026-02-16 09:48:18.393321176 +0000 UTC m=+156.086477356"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.399409 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" event={"ID":"d131f500-cc08-4500-802b-9c7ccb8f5457","Type":"ContainerStarted","Data":"c29040b0d7d76c6284a2116d7045716289c153f2e44d0162bc547961bdcb7cd2"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.419323 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8nms9" podStartSLOduration=6.419304286 podStartE2EDuration="6.419304286s" podCreationTimestamp="2026-02-16 09:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.418565007 +0000 UTC m=+156.111721207" watchObservedRunningTime="2026-02-16 09:48:18.419304286 +0000 UTC m=+156.112460466"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.423196 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.423403 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.923372724 +0000 UTC m=+156.616528904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.423511 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.423950 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:18.923932329 +0000 UTC m=+156.617088509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.430787 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" event={"ID":"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2","Type":"ContainerStarted","Data":"c2ba723a4f1feb4bfa6b0769096e80b2abd9806a60b08e95afdf6f11fc2248e7"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.433082 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.435739 4814 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4hzfq container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.435799 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" podUID="e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.439320 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" event={"ID":"86284abf-f706-432c-871d-5742dca5966b","Type":"ContainerStarted","Data":"07596579105b789e8c4f39462c8b9bdbb3fc3377e3cf9964d09ef10cbcd0a6d0"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.464184 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" podStartSLOduration=133.464164748 podStartE2EDuration="2m13.464164748s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.462913766 +0000 UTC m=+156.156069946" watchObservedRunningTime="2026-02-16 09:48:18.464164748 +0000 UTC m=+156.157320918"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.473906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" event={"ID":"3ff0705e-83d4-4955-9a05-03dfec15075b","Type":"ContainerStarted","Data":"32d789246d0155abe75838ded3881de41b90e4a6cc9ed32e87e49ea2e5bb8e30"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.491714 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-9kljz"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.496451 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.496512 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.498254 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-28lnk" podStartSLOduration=134.498240714 podStartE2EDuration="2m14.498240714s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.495976534 +0000 UTC m=+156.189132724" watchObservedRunningTime="2026-02-16 09:48:18.498240714 +0000 UTC m=+156.191396894"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.522989 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" event={"ID":"a97a7dc8-4167-4e3d-b315-473d5adcfe1b","Type":"ContainerStarted","Data":"5d5cb59f04ce0ce0f30ab00f0c3dc9a9856e918c2a65b29938a2dd493dfafe8c"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.524016 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.524373 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.525194 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.025178461 +0000 UTC m=+156.718334641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.557714 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" podStartSLOduration=134.557689704 podStartE2EDuration="2m14.557689704s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.556968325 +0000 UTC m=+156.250124505" watchObservedRunningTime="2026-02-16 09:48:18.557689704 +0000 UTC m=+156.250845874"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.566399 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" event={"ID":"52fbaf98-06b3-4c96-8155-a94db62cdc56","Type":"ContainerStarted","Data":"5e44ceabe572c3acfa8748ed0dae00c77b7b14c1a8bf09b861fce296f5015ecc"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.571691 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" event={"ID":"ae986417-8048-44d4-b110-6bbe3ab2ce7e","Type":"ContainerStarted","Data":"3f45353e6e22e611ed23736f58bcace2f2d54f096a47742216d5e7dc547b0fc0"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.583519 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" event={"ID":"3f2423fe-728b-4236-9d03-04e3472c915e","Type":"ContainerStarted","Data":"ac9377c9e1d3eb5f5bfe72dfe734e97cdf60ed98346e9fce8f8f9c49afd12fa5"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.605721 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" event={"ID":"9747dc3f-ea55-4af7-8561-eded508bd884","Type":"ContainerStarted","Data":"bbca6a87bf66f9c84fc30c67845de1c2a55d2224156f89ed5dd1a0a8b154b323"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.625503 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.625852 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.125839616 +0000 UTC m=+156.818995796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.653800 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" podStartSLOduration=134.653778119 podStartE2EDuration="2m14.653778119s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.625088836 +0000 UTC m=+156.318245016" watchObservedRunningTime="2026-02-16 09:48:18.653778119 +0000 UTC m=+156.346934299"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.660791 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" event={"ID":"ead6a0b3-8183-4435-96ea-77026e4d9cf0","Type":"ContainerStarted","Data":"ba3872b441e4d658790af6b69ec9ebaa7b7dde6981da4270f635b751ec33f819"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.708930 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hbcgg" podStartSLOduration=134.708908384 podStartE2EDuration="2m14.708908384s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.656065479 +0000 UTC m=+156.349221669" watchObservedRunningTime="2026-02-16 09:48:18.708908384 +0000 UTC m=+156.402064564"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.725881 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.726143 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.226128831 +0000 UTC m=+156.919285011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.736620 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" event={"ID":"91c725e3-26cb-474c-a672-d76cdda6a5de","Type":"ContainerStarted","Data":"c9d9411cb3d8488ad73722bf61b59689a808bcdfd7dcfe299753306c119e10ae"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.748297 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" event={"ID":"73fb725a-9a40-4283-8e3e-296294a08655","Type":"ContainerStarted","Data":"a03b724a55a716133c0ecb49883c0d7f9d5b93753e069884589b86c15b0802c6"}
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.748888 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.748927 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.786683 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-drcdg" podStartSLOduration=134.786652191 podStartE2EDuration="2m14.786652191s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.780850616 +0000 UTC m=+156.474006796" watchObservedRunningTime="2026-02-16 09:48:18.786652191 +0000 UTC m=+156.479808381"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.788976 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-p27f4" podStartSLOduration=133.788964442 podStartE2EDuration="2m13.788964442s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:18.712379387 +0000 UTC m=+156.405535587" watchObservedRunningTime="2026-02-16 09:48:18.788964442 +0000 UTC m=+156.482120612"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.831813 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.832375 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.332358635 +0000 UTC m=+157.025514815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.933341 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:18 crc kubenswrapper[4814]: E0216 09:48:18.934394 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.434378158 +0000 UTC m=+157.127534338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.937979 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.939501 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:18 crc kubenswrapper[4814]: I0216 09:48:18.979904 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.039802 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.039973 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.040006 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr"
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.052127 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.552110877 +0000 UTC m=+157.245267057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.141452 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.142763 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.642232053 +0000 UTC m=+157.335388233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.244625 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.246263 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.746246147 +0000 UTC m=+157.439402327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.351173 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.351478 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.851461665 +0000 UTC m=+157.544617845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.452880 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.453848 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:19.953829775 +0000 UTC m=+157.646985955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.518493 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.524233 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 09:48:19 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]process-running ok
Feb 16 09:48:19 crc kubenswrapper[4814]: healthz check failed
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.524285 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.554067 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.554424 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.054408769 +0000 UTC m=+157.747564949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.663400 4814 patch_prober.go:28] interesting pod/apiserver-76f77b778f-fsxcr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]log ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]etcd ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/max-in-flight-filter ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 16 09:48:19 crc kubenswrapper[4814]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-startinformers ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 16 09:48:19 crc kubenswrapper[4814]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 09:48:19 crc kubenswrapper[4814]: livez check failed
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.663950 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" podUID="c0bc3cf9-b28c-4673-8b2c-ce1d2616ec98" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.663573 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2"
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.663858 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.163844648 +0000 UTC m=+157.857000828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.765124 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.765570 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.265519901 +0000 UTC m=+157.958676091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.768971 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" event={"ID":"bdff7274-020d-47de-a573-391747c777ac","Type":"ContainerStarted","Data":"18b10985b90bf22e1ee66d214b9714bc1591c4b6670f34e21f6f002055271c8c"}
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.774216 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" event={"ID":"c5b89f0c-c038-4eec-8942-bf236eb9ead0","Type":"ContainerStarted","Data":"88043a6e7a89cd0f617a8ed3d4f091639930bedaa13aa780e505e8b06d09adbc"}
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.776735 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerStarted","Data":"67c558268fc495fa900056dc45d922297be9c4534d24c684b73eb7ff6ae821cd"}
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.776831 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerStarted","Data":"7cc44e88b211f6cc8f071bea72de693b985a03480d73649ea54abfa8a8c0f94f"}
Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.780484 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j"
Feb 16 09:48:19
crc kubenswrapper[4814]: I0216 09:48:19.795756 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cr82j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.795845 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.796001 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" event={"ID":"01bc9817-c469-4d6e-a6cc-cd0463962993","Type":"ContainerStarted","Data":"f1f9dec6b830717e19b4a7a4b0c9ff54650667201a4f8c5ddc42a2e6dc7706f7"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.813617 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podStartSLOduration=135.813601209 podStartE2EDuration="2m15.813601209s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:19.813490616 +0000 UTC m=+157.506646796" watchObservedRunningTime="2026-02-16 09:48:19.813601209 +0000 UTC m=+157.506757389" Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.821634 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" 
event={"ID":"e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2","Type":"ContainerStarted","Data":"f640a73eb0e32c094161f853dcd76ff71c929bd29aa81aba7a097b2c7bd87950"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.822289 4814 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4hzfq container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.825841 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" podUID="e2fed6ee-b3ed-4c06-bfbb-4c0957c064b2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.873979 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.877718 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.377691953 +0000 UTC m=+158.070848133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.901820 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" event={"ID":"ead6a0b3-8183-4435-96ea-77026e4d9cf0","Type":"ContainerStarted","Data":"23c88d53cf2ab3ea3cd19b8120ec219e0382dd41d63850624b411743dfd26e87"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.928616 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" event={"ID":"da041f6a-ea31-4e70-b695-aa0fd7e0ce85","Type":"ContainerStarted","Data":"fd44b133f8abb98fb48158773039cbfb4718374500fa6817bc2824b001a651bd"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.936904 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rwdqt" podStartSLOduration=134.936887656 podStartE2EDuration="2m14.936887656s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:19.934906903 +0000 UTC m=+157.628063083" watchObservedRunningTime="2026-02-16 09:48:19.936887656 +0000 UTC m=+157.630043836" Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.937856 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gsf8f" 
podStartSLOduration=135.937851772 podStartE2EDuration="2m15.937851772s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:19.869477615 +0000 UTC m=+157.562633795" watchObservedRunningTime="2026-02-16 09:48:19.937851772 +0000 UTC m=+157.631007952" Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.975107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" event={"ID":"91c725e3-26cb-474c-a672-d76cdda6a5de","Type":"ContainerStarted","Data":"e6ae0facd35169f1be2910ca43e73fa1ce07c1bc42da2e3fb4c733bdf16ba74f"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.975171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" event={"ID":"91c725e3-26cb-474c-a672-d76cdda6a5de","Type":"ContainerStarted","Data":"e788252d3ef73f0c4089e4fb0ae55de5b4b3ff9406f70367e696c22d62495d6a"} Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.975214 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:19 crc kubenswrapper[4814]: E0216 09:48:19.975819 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.475790881 +0000 UTC m=+158.168947221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:19 crc kubenswrapper[4814]: I0216 09:48:19.984873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" event={"ID":"ae79d44f-eef6-42b4-bd2b-50b9faece115","Type":"ContainerStarted","Data":"ad6034050e134cc4faffa4c5cde1d6dd8ea79a3b8d5f1be70c99e9989ad7b634"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.021908 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxnq2" podStartSLOduration=136.021878905 podStartE2EDuration="2m16.021878905s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.019944883 +0000 UTC m=+157.713101073" watchObservedRunningTime="2026-02-16 09:48:20.021878905 +0000 UTC m=+157.715035095" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.037233 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bbbtz" event={"ID":"52fbaf98-06b3-4c96-8155-a94db62cdc56","Type":"ContainerStarted","Data":"4fa67c06ac8ac8e16c6b817d6bbf4f627ae15c7869f03c4645ed12e885561554"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.057301 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b9r5t" 
event={"ID":"028cc490-1e41-4efa-b193-42ff552e7a15","Type":"ContainerStarted","Data":"33d13e8c43b7ff78fac0a6bae2df7d73829265e194e149182a30f38c49333e62"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.077582 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" event={"ID":"ae986417-8048-44d4-b110-6bbe3ab2ce7e","Type":"ContainerStarted","Data":"08affdcacf6cc848748db53649081505a67f3074535acff963c70eb4ddfc8492"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.078935 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.080133 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.080923 4814 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4hzv4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.080968 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" podUID="ae986417-8048-44d4-b110-6bbe3ab2ce7e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.081571 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.581552051 +0000 UTC m=+158.274708231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.112605 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" event={"ID":"20ff0b7b-b891-4fd4-bc71-2eb422f1fa9e","Type":"ContainerStarted","Data":"d7952d0cd6bd2fd2786170bada30a09fa4e5a77081be7037be019d395ef1410a"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.122903 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" podStartSLOduration=135.12288179 podStartE2EDuration="2m15.12288179s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.122245893 +0000 UTC m=+157.815402083" watchObservedRunningTime="2026-02-16 09:48:20.12288179 +0000 UTC m=+157.816037970" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.123515 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" podStartSLOduration=136.123508406 podStartE2EDuration="2m16.123508406s" 
podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.069355677 +0000 UTC m=+157.762511867" watchObservedRunningTime="2026-02-16 09:48:20.123508406 +0000 UTC m=+157.816664586" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.125636 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" event={"ID":"26a661b7-74ed-490d-8003-bc3c6e7d8c4e","Type":"ContainerStarted","Data":"7b4a14b72663e44c53192ee5f2b1c349bea05f6fa6f26f08e43e0315c140737a"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.125707 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" event={"ID":"26a661b7-74ed-490d-8003-bc3c6e7d8c4e","Type":"ContainerStarted","Data":"3f1bc767f7949b09d74ddba69bfe0a99c012f16c27deae5db2c73b938f05ed63"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.126047 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.148362 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mqgk9" podStartSLOduration=136.148338167 podStartE2EDuration="2m16.148338167s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.148043979 +0000 UTC m=+157.841200159" watchObservedRunningTime="2026-02-16 09:48:20.148338167 +0000 UTC m=+157.841494347" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.150340 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" event={"ID":"90271beb-156c-4e46-9965-b2d169d7cb67","Type":"ContainerStarted","Data":"35ca8e766fae889702942b8473e2bc7e88746e121e583f149ad8060bfe58a944"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.150387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" event={"ID":"90271beb-156c-4e46-9965-b2d169d7cb67","Type":"ContainerStarted","Data":"44f769ebc394836b269ad7dc95d635fce7e6a50f00fff78b498e109adb2b7233"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.151265 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.152274 4814 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xdw74 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.152320 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" podUID="90271beb-156c-4e46-9965-b2d169d7cb67" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.163872 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" event={"ID":"0956f442-216c-4be4-9c81-efcb02614c3f","Type":"ContainerStarted","Data":"78e3723e555a981dad911674b869967070890f3bb28d8577e639b1578390c115"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.163925 4814 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" event={"ID":"0956f442-216c-4be4-9c81-efcb02614c3f","Type":"ContainerStarted","Data":"d5ea3bee0a7c29ec286eca7e6405924ee22bc21ac9ab87cd5884a11c6ad096c8"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.180400 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" event={"ID":"d131f500-cc08-4500-802b-9c7ccb8f5457","Type":"ContainerStarted","Data":"d7f1975286cd882751c1df37670bbccbc40cdc9a6fd25f9ce955e2315026ff8e"} Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.181328 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.181350 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.181401 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.182743 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 09:48:20.68272317 +0000 UTC m=+158.375879350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.202795 4814 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6mqrk container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.202852 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" podUID="a97a7dc8-4167-4e3d-b315-473d5adcfe1b" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.211953 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7vx4m" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.229580 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" podStartSLOduration=135.229554945 podStartE2EDuration="2m15.229554945s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 09:48:20.202097925 +0000 UTC m=+157.895254105" watchObservedRunningTime="2026-02-16 09:48:20.229554945 +0000 UTC m=+157.922711135" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.251932 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" podStartSLOduration=135.25191385 podStartE2EDuration="2m15.25191385s" podCreationTimestamp="2026-02-16 09:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.24628397 +0000 UTC m=+157.939440160" watchObservedRunningTime="2026-02-16 09:48:20.25191385 +0000 UTC m=+157.945070030" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.288766 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.290914 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.790894666 +0000 UTC m=+158.484050846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.301905 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gjjfn" podStartSLOduration=136.301878208 podStartE2EDuration="2m16.301878208s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.270887964 +0000 UTC m=+157.964044144" watchObservedRunningTime="2026-02-16 09:48:20.301878208 +0000 UTC m=+157.995034388" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.344483 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6z5lx" podStartSLOduration=136.34446344 podStartE2EDuration="2m16.34446344s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:20.303589114 +0000 UTC m=+157.996745294" watchObservedRunningTime="2026-02-16 09:48:20.34446344 +0000 UTC m=+158.037619620" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.389920 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.390311 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.890293469 +0000 UTC m=+158.583449649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.491865 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.492232 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:20.992218028 +0000 UTC m=+158.685374208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.496668 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:20 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:20 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:20 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.496731 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.592449 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.592940 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 09:48:21.092923665 +0000 UTC m=+158.786079845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.694057 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.694346 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.19433429 +0000 UTC m=+158.887490470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.794896 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.795264 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.295249023 +0000 UTC m=+158.988405203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.896214 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.896674 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.396653638 +0000 UTC m=+159.089809818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.996845 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.997044 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.497021606 +0000 UTC m=+159.190177786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:20 crc kubenswrapper[4814]: I0216 09:48:20.997145 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:20 crc kubenswrapper[4814]: E0216 09:48:20.997486 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.497477188 +0000 UTC m=+159.190633368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.093736 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6mqrk" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.098341 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.098511 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.598490663 +0000 UTC m=+159.291646843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.098675 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.099007 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.598995127 +0000 UTC m=+159.292151307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.189785 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" event={"ID":"86284abf-f706-432c-871d-5742dca5966b","Type":"ContainerStarted","Data":"5e86bc4479dd993deb51e98280eef22df34eb4605115499d6b30d91a45b0b763"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.199412 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.200140 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.700116885 +0000 UTC m=+159.393273055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.220638 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" event={"ID":"bdff7274-020d-47de-a573-391747c777ac","Type":"ContainerStarted","Data":"81269da3126fc6ed0ed6ea37ccf868e31ea623438aa6c4bfc36645ff70340ced"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.232692 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" event={"ID":"c5b89f0c-c038-4eec-8942-bf236eb9ead0","Type":"ContainerStarted","Data":"e802b64633bc4e2323c5231f321f7032d2099bb1ef2455de953f241284a04b10"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.250823 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-b9r5t" event={"ID":"028cc490-1e41-4efa-b193-42ff552e7a15","Type":"ContainerStarted","Data":"86c8da6079107cd25133a6bb8a8deac664028cf228c6cf10fce03c328d026c7a"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.251497 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.275840 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-m9fp4" podStartSLOduration=137.275825408 podStartE2EDuration="2m17.275825408s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:21.236100871 +0000 UTC m=+158.929257041" watchObservedRunningTime="2026-02-16 09:48:21.275825408 +0000 UTC m=+158.968981588" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.276696 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-27xhf" podStartSLOduration=137.27669077 podStartE2EDuration="2m17.27669077s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:21.273914967 +0000 UTC m=+158.967071147" watchObservedRunningTime="2026-02-16 09:48:21.27669077 +0000 UTC m=+158.969846950" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.306300 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.309326 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.809312358 +0000 UTC m=+159.502468538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.315747 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d6q92" podStartSLOduration=137.315732328 podStartE2EDuration="2m17.315732328s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:21.312963835 +0000 UTC m=+159.006120015" watchObservedRunningTime="2026-02-16 09:48:21.315732328 +0000 UTC m=+159.008888508" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.330436 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" event={"ID":"da041f6a-ea31-4e70-b695-aa0fd7e0ce85","Type":"ContainerStarted","Data":"86de5738fa3d66c57ead4bf1e67bc404c971787f6d7f519e46377f07034b0c5d"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.374394 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf" event={"ID":"26a661b7-74ed-490d-8003-bc3c6e7d8c4e","Type":"ContainerStarted","Data":"0c0ac4f9c04c7db0cd24cd4c0e22afac2058b5734a8bf65a103333d85b9b1f06"} Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.379413 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cr82j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.379454 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.413296 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.415091 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:21.915057618 +0000 UTC m=+159.608213798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.430522 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4hzv4" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.432169 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4hzfq" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.472404 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-b9r5t" podStartSLOduration=9.472384552 podStartE2EDuration="9.472384552s" podCreationTimestamp="2026-02-16 09:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:21.379305278 +0000 UTC m=+159.072461458" watchObservedRunningTime="2026-02-16 09:48:21.472384552 +0000 UTC m=+159.165540722" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.504514 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:21 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:21 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:21 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 
09:48:21.504625 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.518772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.519147 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.019131494 +0000 UTC m=+159.712287674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.620174 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.620430 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.120384177 +0000 UTC m=+159.813540357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.620510 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.620870 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.120854829 +0000 UTC m=+159.814011009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.721986 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.722221 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.222183762 +0000 UTC m=+159.915339942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.722291 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.722749 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.222738087 +0000 UTC m=+159.915894437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.823772 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.823960 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.323933517 +0000 UTC m=+160.017089697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.824099 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.824423 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.324414709 +0000 UTC m=+160.017570879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.934715 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.935523 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.435497192 +0000 UTC m=+160.128653372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:21 crc kubenswrapper[4814]: I0216 09:48:21.935624 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:21 crc kubenswrapper[4814]: E0216 09:48:21.936063 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.436045357 +0000 UTC m=+160.129201707 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.037256 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.037515 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.537476964 +0000 UTC m=+160.230633144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.038001 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.038386 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.538370217 +0000 UTC m=+160.231526387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.138862 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.139127 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.639092225 +0000 UTC m=+160.332248405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.139374 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.139751 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.639735342 +0000 UTC m=+160.332891512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.240225 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.240377 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.740352176 +0000 UTC m=+160.433508356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.240693 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.241019 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.741011964 +0000 UTC m=+160.434168144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.256126 4814 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.273150 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xdw74" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.342247 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.342435 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.842408169 +0000 UTC m=+160.535564349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.342497 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.342819 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.84280629 +0000 UTC m=+160.535962470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.381135 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" event={"ID":"da041f6a-ea31-4e70-b695-aa0fd7e0ce85","Type":"ContainerStarted","Data":"34178be40121943385508d9a58a76c792dc8840a1da9b7826295104e33893ced"} Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.381185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" event={"ID":"da041f6a-ea31-4e70-b695-aa0fd7e0ce85","Type":"ContainerStarted","Data":"41d31af469ad3c5d1cce658ea6c4dcfb723c1a06b78235cf7a7ef918d08cbcb6"} Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.405212 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.408914 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7phc6" podStartSLOduration=10.408895026 podStartE2EDuration="10.408895026s" podCreationTimestamp="2026-02-16 09:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:22.407333715 +0000 UTC m=+160.100489895" watchObservedRunningTime="2026-02-16 09:48:22.408895026 +0000 UTC m=+160.102051206" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.443857 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.445182 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:22.94516394 +0000 UTC m=+160.638320120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.494073 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:22 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:22 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:22 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.494629 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:22 crc 
kubenswrapper[4814]: I0216 09:48:22.510656 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.512634 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.521809 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.529176 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.545633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.545690 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28l9t\" (UniqueName: \"kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.545732 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 
09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.545772 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.546102 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.046084753 +0000 UTC m=+160.739240933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.646629 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.646850 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 09:48:23.14680906 +0000 UTC m=+160.839965240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.646911 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.647041 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28l9t\" (UniqueName: \"kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.647090 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.647179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.647754 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.647867 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.147845828 +0000 UTC m=+160.841002108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.648111 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.670257 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28l9t\" (UniqueName: 
\"kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t\") pod \"certified-operators-dcts9\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.704957 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.706084 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.708119 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.731508 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.748370 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.748587 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.248559795 +0000 UTC m=+160.941715975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.748975 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq697\" (UniqueName: \"kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.749132 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.749505 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.24949314 +0000 UTC m=+160.942649320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.749798 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.749909 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.831150 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.850969 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.851217 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq697\" (UniqueName: \"kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.851258 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.851273 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.851647 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") 
" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 09:48:22.851720 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.351706167 +0000 UTC m=+161.044862337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.852487 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.867655 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq697\" (UniqueName: \"kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697\") pod \"community-operators-ngqwc\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.908673 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.909973 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.920937 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"] Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.953184 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.953266 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.953296 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9456p\" (UniqueName: \"kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:22 crc kubenswrapper[4814]: I0216 09:48:22.953485 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:22 crc kubenswrapper[4814]: E0216 
09:48:22.953710 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.453692559 +0000 UTC m=+161.146848729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.022512 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.056185 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.056527 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.056617 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.056636 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9456p\" (UniqueName: \"kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: E0216 09:48:23.056962 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.556947372 +0000 UTC m=+161.250103552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.057332 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.057548 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.061042 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.083400 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9456p\" (UniqueName: \"kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p\") pod \"certified-operators-8lsc9\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") " pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.106442 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gg2x"] Feb 16 09:48:23 crc kubenswrapper[4814]: 
I0216 09:48:23.107786 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.118043 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gg2x"] Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.157976 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rffxw\" (UniqueName: \"kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.158029 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.158052 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.158198 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " 
pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: E0216 09:48:23.158613 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 09:48:23.658597205 +0000 UTC m=+161.351753385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tb5k2" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.219878 4814 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T09:48:22.256169957Z","Handler":null,"Name":""} Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.224919 4814 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.224955 4814 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.230443 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.259353 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.259584 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.259641 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rffxw\" (UniqueName: \"kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.259677 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.260152 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " 
pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.261672 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.275353 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.294044 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rffxw\" (UniqueName: \"kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw\") pod \"community-operators-8gg2x\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") " pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.361318 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.365070 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.365118 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.385664 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tb5k2\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.387735 4814 generic.go:334] "Generic (PLEG): container finished" podID="ae79d44f-eef6-42b4-bd2b-50b9faece115" containerID="ad6034050e134cc4faffa4c5cde1d6dd8ea79a3b8d5f1be70c99e9989ad7b634" exitCode=0 Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.387786 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" event={"ID":"ae79d44f-eef6-42b4-bd2b-50b9faece115","Type":"ContainerDied","Data":"ad6034050e134cc4faffa4c5cde1d6dd8ea79a3b8d5f1be70c99e9989ad7b634"} Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.389438 4814 generic.go:334] "Generic (PLEG): container finished" podID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerID="e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62" exitCode=0 Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.390263 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerDied","Data":"e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62"} Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.390289 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerStarted","Data":"15783dc56790d7ee06eb4a3045985a2b8ff5dabe19d9c6dcb642969c6ec779bb"} Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.392129 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.449859 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.499737 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:23 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:23 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:23 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.499796 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.500053 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.508573 
4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:23 crc kubenswrapper[4814]: W0216 09:48:23.510642 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb772d6e0_ae59_4ddb_b5f8_301ac88ec747.slice/crio-bf1b2990e7a25c140e836e86b341c9c5acf4c749eac6c17685c91a0f3cb4f4a7 WatchSource:0}: Error finding container bf1b2990e7a25c140e836e86b341c9c5acf4c749eac6c17685c91a0f3cb4f4a7: Status 404 returned error can't find the container with id bf1b2990e7a25c140e836e86b341c9c5acf4c749eac6c17685c91a0f3cb4f4a7 Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.534118 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"] Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.676940 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gg2x"] Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.770775 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"] Feb 16 09:48:23 crc kubenswrapper[4814]: W0216 09:48:23.840553 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02ac473_c7bb_4702_ac42_f0e973d03f05.slice/crio-9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4 WatchSource:0}: Error finding container 9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4: Status 404 returned error can't find the container with id 9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4 Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.865604 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.865665 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.870140 4814 patch_prober.go:28] interesting pod/console-f9d7485db-4xwqr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.870196 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xwqr" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 16 09:48:23 crc kubenswrapper[4814]: I0216 09:48:23.975331 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.021116 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.031448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fsxcr" Feb 16 09:48:24 crc kubenswrapper[4814]: E0216 09:48:24.076921 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod231208dc_d685_4a03_935e_ac1f6c6f7bf4.slice/crio-97ec0444972677904d6417fa31da642690b715301ec369cd861fb7f67a0052ff.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod231208dc_d685_4a03_935e_ac1f6c6f7bf4.slice/crio-conmon-97ec0444972677904d6417fa31da642690b715301ec369cd861fb7f67a0052ff.scope\": RecentStats: unable to find data in memory cache]" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.339479 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gfngr" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.430820 4814 generic.go:334] "Generic (PLEG): container finished" podID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerID="97ec0444972677904d6417fa31da642690b715301ec369cd861fb7f67a0052ff" exitCode=0 Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.430894 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerDied","Data":"97ec0444972677904d6417fa31da642690b715301ec369cd861fb7f67a0052ff"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.430929 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerStarted","Data":"0d131f97ea1fea35c074edd7ad43023bf850a0a649b06772102e0f80f4ac1743"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.436454 4814 generic.go:334] "Generic (PLEG): container finished" podID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerID="0a179a2c3b3e0e415dd88007fe8724483bd061d8f85fb25a7e6cfa68bd258421" exitCode=0 Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.436546 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerDied","Data":"0a179a2c3b3e0e415dd88007fe8724483bd061d8f85fb25a7e6cfa68bd258421"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.436587 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerStarted","Data":"bf1b2990e7a25c140e836e86b341c9c5acf4c749eac6c17685c91a0f3cb4f4a7"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.451965 4814 generic.go:334] "Generic (PLEG): container finished" podID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerID="6947c94419bc7c55ae7e91bbaf59735abf33cd49b2e401aca0c18d8bef233999" exitCode=0 Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.452082 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerDied","Data":"6947c94419bc7c55ae7e91bbaf59735abf33cd49b2e401aca0c18d8bef233999"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.452112 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerStarted","Data":"4899a9e184340c828e5ec61f4e228446e7915f07e0551cb3d36b704aba7c624d"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.469091 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" event={"ID":"a02ac473-c7bb-4702-ac42-f0e973d03f05","Type":"ContainerStarted","Data":"a9eb41b6998347f20822c5e1fde8be661124d7c23b7fbe80687f68c76a2edd15"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.469138 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" event={"ID":"a02ac473-c7bb-4702-ac42-f0e973d03f05","Type":"ContainerStarted","Data":"9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4"} Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.494713 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:24 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:24 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:24 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.494787 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.508548 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.510243 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.512677 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.566681 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" podStartSLOduration=140.566660073 podStartE2EDuration="2m20.566660073s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:24.564021463 +0000 UTC m=+162.257177663" watchObservedRunningTime="2026-02-16 09:48:24.566660073 +0000 UTC m=+162.259816253" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.576588 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 
09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.583067 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.583447 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.583774 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dhq\" (UniqueName: \"kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.686024 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.686080 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " 
pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.686144 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dhq\" (UniqueName: \"kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.687139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.687423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.722209 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dhq\" (UniqueName: \"kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq\") pod \"redhat-marketplace-jcv4r\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.827935 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.878694 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.889592 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9sc9\" (UniqueName: \"kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9\") pod \"ae79d44f-eef6-42b4-bd2b-50b9faece115\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.889758 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume\") pod \"ae79d44f-eef6-42b4-bd2b-50b9faece115\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.895518 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9" (OuterVolumeSpecName: "kube-api-access-z9sc9") pod "ae79d44f-eef6-42b4-bd2b-50b9faece115" (UID: "ae79d44f-eef6-42b4-bd2b-50b9faece115"). InnerVolumeSpecName "kube-api-access-z9sc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.899200 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae79d44f-eef6-42b4-bd2b-50b9faece115" (UID: "ae79d44f-eef6-42b4-bd2b-50b9faece115"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.889861 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume\") pod \"ae79d44f-eef6-42b4-bd2b-50b9faece115\" (UID: \"ae79d44f-eef6-42b4-bd2b-50b9faece115\") " Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.902898 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae79d44f-eef6-42b4-bd2b-50b9faece115-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.902929 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9sc9\" (UniqueName: \"kubernetes.io/projected/ae79d44f-eef6-42b4-bd2b-50b9faece115-kube-api-access-z9sc9\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.907729 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:48:24 crc kubenswrapper[4814]: E0216 09:48:24.907955 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae79d44f-eef6-42b4-bd2b-50b9faece115" containerName="collect-profiles" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.907967 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae79d44f-eef6-42b4-bd2b-50b9faece115" containerName="collect-profiles" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.908275 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae79d44f-eef6-42b4-bd2b-50b9faece115" containerName="collect-profiles" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.909127 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.909238 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae79d44f-eef6-42b4-bd2b-50b9faece115" (UID: "ae79d44f-eef6-42b4-bd2b-50b9faece115"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:48:24 crc kubenswrapper[4814]: I0216 09:48:24.925443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.002500 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.004290 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.004400 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vvq\" (UniqueName: \"kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.004438 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.004730 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae79d44f-eef6-42b4-bd2b-50b9faece115-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.061857 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.061917 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.062009 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.061936 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.108403 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p2vvq\" (UniqueName: \"kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.108489 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.108575 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.109187 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.109869 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.159467 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vvq\" (UniqueName: 
\"kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq\") pod \"redhat-marketplace-88g2q\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.249174 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.425657 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.492214 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.495927 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:25 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:25 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:25 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.496016 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.547964 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.548636 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z" event={"ID":"ae79d44f-eef6-42b4-bd2b-50b9faece115","Type":"ContainerDied","Data":"cf99f0ef0213c9accd23aadfd6d625bf13d1c0bd6683ee7de489d6c0b10ceba3"} Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.548726 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf99f0ef0213c9accd23aadfd6d625bf13d1c0bd6683ee7de489d6c0b10ceba3" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.571188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerStarted","Data":"8e46ae868fd20131d4586ffb852053c7318cad5b39ff13b121a1aa5cafdee6cb"} Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.571401 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.624055 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:48:25 crc kubenswrapper[4814]: W0216 09:48:25.630470 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4ecbcef_c7e9_4e4c_93b3_63d71c4c097c.slice/crio-58a69ebec02a24731a27ada27ff23529a57e15269d6890490b885b5b66958a6e WatchSource:0}: Error finding container 58a69ebec02a24731a27ada27ff23529a57e15269d6890490b885b5b66958a6e: Status 404 returned error can't find the container with id 58a69ebec02a24731a27ada27ff23529a57e15269d6890490b885b5b66958a6e Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.911405 4814 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.913054 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.915047 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.924976 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.928000 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjtv\" (UniqueName: \"kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.928131 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:25 crc kubenswrapper[4814]: I0216 09:48:25.928172 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.029710 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.029785 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.029815 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjtv\" (UniqueName: \"kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.031382 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.031628 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.060887 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjtv\" (UniqueName: 
\"kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv\") pod \"redhat-operators-kz6kn\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.271035 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.273954 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.279374 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.280309 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.284325 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.318214 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.319741 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.335684 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.335769 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.335791 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.335849 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.335869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2zb\" (UniqueName: 
\"kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.340970 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.342039 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.438291 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.438341 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.438428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.438455 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2zb\" (UniqueName: 
\"kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.438916 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.439371 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.439746 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.445774 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.474654 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2zb\" (UniqueName: 
\"kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb\") pod \"redhat-operators-khzgg\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.478290 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.497713 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:26 crc kubenswrapper[4814]: [-]has-synced failed: reason withheld Feb 16 09:48:26 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:26 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.497804 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.582612 4814 generic.go:334] "Generic (PLEG): container finished" podID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerID="88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1" exitCode=0 Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.582688 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" 
event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerDied","Data":"88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1"} Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.605449 4814 generic.go:334] "Generic (PLEG): container finished" podID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerID="ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a" exitCode=0 Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.605832 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerDied","Data":"ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a"} Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.605907 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerStarted","Data":"58a69ebec02a24731a27ada27ff23529a57e15269d6890490b885b5b66958a6e"} Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.617681 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:26 crc kubenswrapper[4814]: I0216 09:48:26.646926 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:48:27 crc kubenswrapper[4814]: W0216 09:48:27.084604 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb2178d_394e_4d6b_baf0_8242e537aa1e.slice/crio-4c6ffd85536ac25266685f5c6f5b7d8c54315787c7003e78507603d3663c4e12 WatchSource:0}: Error finding container 4c6ffd85536ac25266685f5c6f5b7d8c54315787c7003e78507603d3663c4e12: Status 404 returned error can't find the container with id 4c6ffd85536ac25266685f5c6f5b7d8c54315787c7003e78507603d3663c4e12 Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.100052 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.155709 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.163027 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/83343376-433f-46da-b90f-9e1dd9270ea4-metrics-certs\") pod \"network-metrics-daemon-l9dlr\" (UID: \"83343376-433f-46da-b90f-9e1dd9270ea4\") " pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.164263 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.183915 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:48:27 crc kubenswrapper[4814]: W0216 09:48:27.198257 4814 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode9f829d5_39b1_4d40_ac2f_25eb1d31b250.slice/crio-564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d WatchSource:0}: Error finding container 564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d: Status 404 returned error can't find the container with id 564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.259032 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9dlr" Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.513495 4814 patch_prober.go:28] interesting pod/router-default-5444994796-9kljz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 09:48:27 crc kubenswrapper[4814]: [+]has-synced ok Feb 16 09:48:27 crc kubenswrapper[4814]: [+]process-running ok Feb 16 09:48:27 crc kubenswrapper[4814]: healthz check failed Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.513989 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-9kljz" podUID="c6caef89-a08c-46ec-b2c8-af0f2b795b02" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.642876 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9f829d5-39b1-4d40-ac2f-25eb1d31b250","Type":"ContainerStarted","Data":"564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d"} Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.645843 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" 
event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerStarted","Data":"4c6ffd85536ac25266685f5c6f5b7d8c54315787c7003e78507603d3663c4e12"} Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.648232 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerStarted","Data":"40de042b068eaf3d4669fe9404d99a5c3f5d597216e7e842dc2106af1164f680"} Feb 16 09:48:27 crc kubenswrapper[4814]: I0216 09:48:27.897046 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l9dlr"] Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.495590 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.498623 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-9kljz" Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.666258 4814 generic.go:334] "Generic (PLEG): container finished" podID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerID="ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3" exitCode=0 Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.666370 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerDied","Data":"ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3"} Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.674166 4814 generic.go:334] "Generic (PLEG): container finished" podID="e9f829d5-39b1-4d40-ac2f-25eb1d31b250" containerID="64fe5d79d28d193e32ebc4efb3294c62352fc0b347f56f8424a5542ec6b33765" exitCode=0 Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.674510 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9f829d5-39b1-4d40-ac2f-25eb1d31b250","Type":"ContainerDied","Data":"64fe5d79d28d193e32ebc4efb3294c62352fc0b347f56f8424a5542ec6b33765"} Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.683674 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" event={"ID":"83343376-433f-46da-b90f-9e1dd9270ea4","Type":"ContainerStarted","Data":"4e80539612a61e29b17be9e44b23569a69724126482e6e184b2db7d96fc1f7ea"} Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.683780 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" event={"ID":"83343376-433f-46da-b90f-9e1dd9270ea4","Type":"ContainerStarted","Data":"6da47ea43930e92ebdef6792c67acec9fb00582c097ac56c3081397abf398308"} Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.727662 4814 generic.go:334] "Generic (PLEG): container finished" podID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerID="2bc27b0bc0360de9722db81828f289841705b9e35d296b3b777d3ecc936515f7" exitCode=0 Feb 16 09:48:28 crc kubenswrapper[4814]: I0216 09:48:28.728597 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerDied","Data":"2bc27b0bc0360de9722db81828f289841705b9e35d296b3b777d3ecc936515f7"} Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.722108 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.735135 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.742571 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.742932 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.784754 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.792849 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9dlr" event={"ID":"83343376-433f-46da-b90f-9e1dd9270ea4","Type":"ContainerStarted","Data":"448e1e1f06885c7e684f98a6f11cb887e661ba08a8cded73d4f2532cc35bb4a8"} Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.927390 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:29 crc kubenswrapper[4814]: I0216 09:48:29.927592 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.067401 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.067497 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.067631 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.122105 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.365339 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.388204 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l9dlr" podStartSLOduration=146.38817647 podStartE2EDuration="2m26.38817647s" podCreationTimestamp="2026-02-16 09:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:29.81295662 +0000 UTC m=+167.506112830" watchObservedRunningTime="2026-02-16 09:48:30.38817647 +0000 UTC m=+168.081332650" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.394811 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.488821 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access\") pod \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.488943 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir\") pod \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\" (UID: \"e9f829d5-39b1-4d40-ac2f-25eb1d31b250\") " Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.489286 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e9f829d5-39b1-4d40-ac2f-25eb1d31b250" (UID: "e9f829d5-39b1-4d40-ac2f-25eb1d31b250"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.495337 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e9f829d5-39b1-4d40-ac2f-25eb1d31b250" (UID: "e9f829d5-39b1-4d40-ac2f-25eb1d31b250"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.592368 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.592418 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e9f829d5-39b1-4d40-ac2f-25eb1d31b250-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.714064 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-b9r5t" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.748031 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.824343 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e9f829d5-39b1-4d40-ac2f-25eb1d31b250","Type":"ContainerDied","Data":"564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d"} Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.824405 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="564f9ed52e20811fca3cb32cc2acdca7326a3bce3953284b4ba0c1bd5bb6620d" Feb 16 09:48:30 crc kubenswrapper[4814]: I0216 09:48:30.824358 4814 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 09:48:30 crc kubenswrapper[4814]: W0216 09:48:30.981857 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod12d19779_08c9_45a2_a9cb_0ab5fd6e8b2a.slice/crio-0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727 WatchSource:0}: Error finding container 0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727: Status 404 returned error can't find the container with id 0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727 Feb 16 09:48:31 crc kubenswrapper[4814]: I0216 09:48:31.364144 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:48:31 crc kubenswrapper[4814]: I0216 09:48:31.867949 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a","Type":"ContainerStarted","Data":"1e01566293a8f9cb13e3d380f9134ffbf7bd709d81513a81314bec0e24e49cd2"} Feb 16 09:48:31 crc kubenswrapper[4814]: I0216 09:48:31.868288 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a","Type":"ContainerStarted","Data":"0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727"} Feb 16 09:48:31 crc kubenswrapper[4814]: I0216 09:48:31.910994 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.910975379 podStartE2EDuration="2.910975379s" podCreationTimestamp="2026-02-16 09:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:48:31.89556844 +0000 UTC m=+169.588724630" 
watchObservedRunningTime="2026-02-16 09:48:31.910975379 +0000 UTC m=+169.604131559" Feb 16 09:48:32 crc kubenswrapper[4814]: I0216 09:48:32.898122 4814 generic.go:334] "Generic (PLEG): container finished" podID="12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" containerID="1e01566293a8f9cb13e3d380f9134ffbf7bd709d81513a81314bec0e24e49cd2" exitCode=0 Feb 16 09:48:32 crc kubenswrapper[4814]: I0216 09:48:32.898182 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a","Type":"ContainerDied","Data":"1e01566293a8f9cb13e3d380f9134ffbf7bd709d81513a81314bec0e24e49cd2"} Feb 16 09:48:33 crc kubenswrapper[4814]: I0216 09:48:33.932072 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:33 crc kubenswrapper[4814]: I0216 09:48:33.941134 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 09:48:35 crc kubenswrapper[4814]: I0216 09:48:35.058765 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:35 crc kubenswrapper[4814]: I0216 09:48:35.059079 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:35 crc kubenswrapper[4814]: I0216 09:48:35.058988 4814 patch_prober.go:28] interesting pod/downloads-7954f5f757-j5fnw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 
10.217.0.18:8080: connect: connection refused" start-of-body= Feb 16 09:48:35 crc kubenswrapper[4814]: I0216 09:48:35.059182 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-j5fnw" podUID="5d9feb14-2511-4e1e-a78a-e737ae28770c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 16 09:48:37 crc kubenswrapper[4814]: I0216 09:48:37.959855 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:48:37 crc kubenswrapper[4814]: I0216 09:48:37.960220 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.100371 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.863346 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.968746 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a","Type":"ContainerDied","Data":"0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727"} Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.968788 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2a61eeaeb4371ef4369b168a45f5318343221979f1da2c956a1027ff1ce727" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.968801 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.984090 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir\") pod \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.984204 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" (UID: "12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.984643 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access\") pod \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\" (UID: \"12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a\") " Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.984961 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:41 crc kubenswrapper[4814]: I0216 09:48:41.991446 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" (UID: "12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.086522 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.526608 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"] Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.527084 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" containerID="cri-o://3eb8ef92c17a91eb3541164a619fa713057d16381ff10bdd124c9e6d8241c13f" gracePeriod=30 Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.543711 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"] Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.544084 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" containerID="cri-o://c770a6f1488ec33d836595d1791d3ccc84d5444cd97ab434cca791a6d598a63b" gracePeriod=30 Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.979375 4814 generic.go:334] "Generic (PLEG): container finished" podID="e498024a-b042-4d7c-9f47-4140b465bd63" containerID="3eb8ef92c17a91eb3541164a619fa713057d16381ff10bdd124c9e6d8241c13f" exitCode=0 Feb 16 09:48:42 crc kubenswrapper[4814]: I0216 09:48:42.979446 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" 
event={"ID":"e498024a-b042-4d7c-9f47-4140b465bd63","Type":"ContainerDied","Data":"3eb8ef92c17a91eb3541164a619fa713057d16381ff10bdd124c9e6d8241c13f"} Feb 16 09:48:43 crc kubenswrapper[4814]: I0216 09:48:43.513954 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:48:43 crc kubenswrapper[4814]: I0216 09:48:43.967740 4814 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jt6sp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 09:48:43 crc kubenswrapper[4814]: I0216 09:48:43.967862 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 09:48:43 crc kubenswrapper[4814]: I0216 09:48:43.987478 4814 generic.go:334] "Generic (PLEG): container finished" podID="9c0e0223-e440-4b15-8183-41940ec62701" containerID="c770a6f1488ec33d836595d1791d3ccc84d5444cd97ab434cca791a6d598a63b" exitCode=0 Feb 16 09:48:43 crc kubenswrapper[4814]: I0216 09:48:43.987576 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" event={"ID":"9c0e0223-e440-4b15-8183-41940ec62701","Type":"ContainerDied","Data":"c770a6f1488ec33d836595d1791d3ccc84d5444cd97ab434cca791a6d598a63b"} Feb 16 09:48:45 crc kubenswrapper[4814]: I0216 09:48:45.079065 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-j5fnw" Feb 16 09:48:46 crc kubenswrapper[4814]: I0216 09:48:46.142170 4814 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-d67m2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 09:48:46 crc kubenswrapper[4814]: I0216 09:48:46.142729 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.945862 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.951942 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974439 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"] Feb 16 09:48:50 crc kubenswrapper[4814]: E0216 09:48:50.974710 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974725 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: E0216 09:48:50.974740 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f829d5-39b1-4d40-ac2f-25eb1d31b250" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974747 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f829d5-39b1-4d40-ac2f-25eb1d31b250" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: E0216 09:48:50.974757 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974763 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: E0216 09:48:50.974771 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974777 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974875 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9c0e0223-e440-4b15-8183-41940ec62701" containerName="route-controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974889 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f829d5-39b1-4d40-ac2f-25eb1d31b250" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974900 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" containerName="controller-manager" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.974908 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d19779-08c9-45a2-a9cb-0ab5fd6e8b2a" containerName="pruner" Feb 16 09:48:50 crc kubenswrapper[4814]: I0216 09:48:50.975311 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.023436 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"] Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.037481 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" event={"ID":"e498024a-b042-4d7c-9f47-4140b465bd63","Type":"ContainerDied","Data":"b1588a51f03b6b5af77176d3ebbba0b4e705fbc551f2e7bf84cc37eeb9d94622"} Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.037575 4814 scope.go:117] "RemoveContainer" containerID="3eb8ef92c17a91eb3541164a619fa713057d16381ff10bdd124c9e6d8241c13f" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.037704 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jt6sp" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.038842 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb8ht\" (UniqueName: \"kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht\") pod \"9c0e0223-e440-4b15-8183-41940ec62701\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.038933 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert\") pod \"e498024a-b042-4d7c-9f47-4140b465bd63\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx26f\" (UniqueName: \"kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f\") pod \"e498024a-b042-4d7c-9f47-4140b465bd63\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039197 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca\") pod \"9c0e0223-e440-4b15-8183-41940ec62701\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039221 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles\") pod \"e498024a-b042-4d7c-9f47-4140b465bd63\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039248 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert\") pod \"9c0e0223-e440-4b15-8183-41940ec62701\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039283 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config\") pod \"9c0e0223-e440-4b15-8183-41940ec62701\" (UID: \"9c0e0223-e440-4b15-8183-41940ec62701\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039307 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca\") pod \"e498024a-b042-4d7c-9f47-4140b465bd63\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039343 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config\") pod \"e498024a-b042-4d7c-9f47-4140b465bd63\" (UID: \"e498024a-b042-4d7c-9f47-4140b465bd63\") " Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039516 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039560 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert\") pod 
\"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039600 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gx64\" (UniqueName: \"kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.039624 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.040602 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c0e0223-e440-4b15-8183-41940ec62701" (UID: "9c0e0223-e440-4b15-8183-41940ec62701"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.040623 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config" (OuterVolumeSpecName: "config") pod "9c0e0223-e440-4b15-8183-41940ec62701" (UID: "9c0e0223-e440-4b15-8183-41940ec62701"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.040995 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e498024a-b042-4d7c-9f47-4140b465bd63" (UID: "e498024a-b042-4d7c-9f47-4140b465bd63"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.041251 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca" (OuterVolumeSpecName: "client-ca") pod "e498024a-b042-4d7c-9f47-4140b465bd63" (UID: "e498024a-b042-4d7c-9f47-4140b465bd63"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.041637 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config" (OuterVolumeSpecName: "config") pod "e498024a-b042-4d7c-9f47-4140b465bd63" (UID: "e498024a-b042-4d7c-9f47-4140b465bd63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.045941 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" event={"ID":"9c0e0223-e440-4b15-8183-41940ec62701","Type":"ContainerDied","Data":"b58411e5c3f31ea198bca54e1cd9ff97fee1d410dca6dc2821cdd46a9d3ee53f"} Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.046034 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.058072 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c0e0223-e440-4b15-8183-41940ec62701" (UID: "9c0e0223-e440-4b15-8183-41940ec62701"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.058974 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht" (OuterVolumeSpecName: "kube-api-access-fb8ht") pod "9c0e0223-e440-4b15-8183-41940ec62701" (UID: "9c0e0223-e440-4b15-8183-41940ec62701"). InnerVolumeSpecName "kube-api-access-fb8ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.072968 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f" (OuterVolumeSpecName: "kube-api-access-xx26f") pod "e498024a-b042-4d7c-9f47-4140b465bd63" (UID: "e498024a-b042-4d7c-9f47-4140b465bd63"). InnerVolumeSpecName "kube-api-access-xx26f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.073981 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e498024a-b042-4d7c-9f47-4140b465bd63" (UID: "e498024a-b042-4d7c-9f47-4140b465bd63"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.141726 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.141850 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gx64\" (UniqueName: \"kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.141907 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142023 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142079 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e498024a-b042-4d7c-9f47-4140b465bd63-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142099 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx26f\" (UniqueName: \"kubernetes.io/projected/e498024a-b042-4d7c-9f47-4140b465bd63-kube-api-access-xx26f\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142112 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142123 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142137 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0e0223-e440-4b15-8183-41940ec62701-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142149 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0e0223-e440-4b15-8183-41940ec62701-config\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142161 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142175 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e498024a-b042-4d7c-9f47-4140b465bd63-config\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.142186 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb8ht\" (UniqueName: \"kubernetes.io/projected/9c0e0223-e440-4b15-8183-41940ec62701-kube-api-access-fb8ht\") on node \"crc\" DevicePath \"\""
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.143349 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.144129 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.145957 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.160658 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gx64\" (UniqueName: \"kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64\") pod \"route-controller-manager-6867996bd5-m6z5t\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.313438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.378877 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"]
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.392700 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jt6sp"]
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.398877 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"]
Feb 16 09:48:51 crc kubenswrapper[4814]: I0216 09:48:51.426087 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-d67m2"]
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.001843 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0e0223-e440-4b15-8183-41940ec62701" path="/var/lib/kubelet/pods/9c0e0223-e440-4b15-8183-41940ec62701/volumes"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.002461 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e498024a-b042-4d7c-9f47-4140b465bd63" path="/var/lib/kubelet/pods/e498024a-b042-4d7c-9f47-4140b465bd63/volumes"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.914061 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"]
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.915073 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.921878 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.923786 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.923861 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.923796 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.924211 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.925318 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.929146 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"]
Feb 16 09:48:53 crc kubenswrapper[4814]: I0216 09:48:53.968811 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.084255 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.084328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrdb\" (UniqueName: \"kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.084402 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.084469 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.084656 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.186019 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.186101 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsrdb\" (UniqueName: \"kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.186167 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.186228 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.186307 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.188161 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.190368 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.212034 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsrdb\" (UniqueName: \"kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.459841 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.465097 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles\") pod \"controller-manager-844596c887-hwf5q\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:54 crc kubenswrapper[4814]: I0216 09:48:54.582451 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:48:55 crc kubenswrapper[4814]: E0216 09:48:55.232681 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 16 09:48:55 crc kubenswrapper[4814]: E0216 09:48:55.233184 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28l9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dcts9_openshift-marketplace(3594c0fb-ca70-4560-ba53-a5e217a0ddf7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:48:55 crc kubenswrapper[4814]: E0216 09:48:55.234751 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dcts9" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7"
Feb 16 09:48:55 crc kubenswrapper[4814]: I0216 09:48:55.569495 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-th4pf"
Feb 16 09:48:56 crc kubenswrapper[4814]: E0216 09:48:56.461812 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 16 09:48:56 crc kubenswrapper[4814]: E0216 09:48:56.462725 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9456p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8lsc9_openshift-marketplace(1093e2eb-672e-4aae-8ee6-ffc390592ff8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:48:56 crc kubenswrapper[4814]: E0216 09:48:56.463971 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8lsc9" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.332654 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8lsc9" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.336845 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dcts9" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.437184 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.438030 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rffxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8gg2x_openshift-marketplace(231208dc-d685-4a03-935e-ac1f6c6f7bf4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.440544 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8gg2x" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.468121 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.468293 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq697,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ngqwc_openshift-marketplace(b772d6e0-ae59-4ddb-b5f8-301ac88ec747): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:48:57 crc kubenswrapper[4814]: E0216 09:48:57.469548 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ngqwc" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.323380 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8gg2x" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.323390 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ngqwc" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.422292 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.422521 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4j2zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-khzgg_openshift-marketplace(37e44ee2-4f8c-44f7-9428-966356c68a90): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.424520 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-khzgg" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.440489 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.440688 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgjtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kz6kn_openshift-marketplace(afb2178d-394e-4d6b-baf0-8242e537aa1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:49:01 crc kubenswrapper[4814]: E0216 09:49:01.442051 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kz6kn" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e"
Feb 16 09:49:02 crc kubenswrapper[4814]: I0216 09:49:02.486063 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"]
Feb 16 09:49:02 crc kubenswrapper[4814]: I0216 09:49:02.582986 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"]
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.891236 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kz6kn" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.891298 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-khzgg" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90"
Feb 16 09:49:02 crc kubenswrapper[4814]: I0216 09:49:02.919310 4814 scope.go:117] "RemoveContainer" containerID="c770a6f1488ec33d836595d1791d3ccc84d5444cd97ab434cca791a6d598a63b"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.971995 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.972216 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2vvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-88g2q_openshift-marketplace(b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.973561 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-88g2q" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.977853 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.977997 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2dhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jcv4r_openshift-marketplace(0857fc2a-4cdb-4f97-aca4-20a08fc1060a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 16 09:49:02 crc kubenswrapper[4814]: E0216 09:49:02.979154 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jcv4r" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a"
Feb 16 09:49:03 crc
kubenswrapper[4814]: I0216 09:49:03.143308 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"] Feb 16 09:49:03 crc kubenswrapper[4814]: E0216 09:49:03.143461 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jcv4r" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" Feb 16 09:49:03 crc kubenswrapper[4814]: E0216 09:49:03.145226 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-88g2q" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" Feb 16 09:49:03 crc kubenswrapper[4814]: I0216 09:49:03.205329 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"] Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.149361 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" event={"ID":"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62","Type":"ContainerStarted","Data":"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"} Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.150023 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" event={"ID":"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62","Type":"ContainerStarted","Data":"d6c3ab7749a4472c9fe00608539ca4a734367022de1cfad0f687fad14167acef"} Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.149811 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" containerID="cri-o://00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f" gracePeriod=30 Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.151234 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.154232 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" event={"ID":"8030103a-2e5a-4035-9c42-b64787681b23","Type":"ContainerStarted","Data":"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f"} Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.154334 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" event={"ID":"8030103a-2e5a-4035-9c42-b64787681b23","Type":"ContainerStarted","Data":"374cce71a53ab6b3dde6522e9c1d24242683fc72d7edb1fac1e09fecc141ff9b"} Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.154448 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" podUID="8030103a-2e5a-4035-9c42-b64787681b23" containerName="route-controller-manager" containerID="cri-o://ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f" gracePeriod=30 Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.154853 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.170614 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.175916 4814 patch_prober.go:28] interesting pod/controller-manager-844596c887-hwf5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:47702->10.217.0.55:8443: read: connection reset by peer" start-of-body= Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.176001 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:47702->10.217.0.55:8443: read: connection reset by peer" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.214696 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" podStartSLOduration=22.214666509 podStartE2EDuration="22.214666509s" podCreationTimestamp="2026-02-16 09:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:04.211792785 +0000 UTC m=+201.904948985" watchObservedRunningTime="2026-02-16 09:49:04.214666509 +0000 UTC m=+201.907822689" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.217284 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" podStartSLOduration=22.217276855 podStartE2EDuration="22.217276855s" podCreationTimestamp="2026-02-16 09:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:04.188803166 +0000 UTC 
m=+201.881959366" watchObservedRunningTime="2026-02-16 09:49:04.217276855 +0000 UTC m=+201.910433035" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.604935 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.615134 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.642286 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"] Feb 16 09:49:04 crc kubenswrapper[4814]: E0216 09:49:04.642675 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8030103a-2e5a-4035-9c42-b64787681b23" containerName="route-controller-manager" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.642695 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8030103a-2e5a-4035-9c42-b64787681b23" containerName="route-controller-manager" Feb 16 09:49:04 crc kubenswrapper[4814]: E0216 09:49:04.642714 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.642723 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.642932 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="8030103a-2e5a-4035-9c42-b64787681b23" containerName="route-controller-manager" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.642953 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" Feb 16 09:49:04 
crc kubenswrapper[4814]: I0216 09:49:04.644030 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.658411 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"] Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.775745 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles\") pod \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.775994 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config\") pod \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776031 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert\") pod \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776054 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gx64\" (UniqueName: \"kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64\") pod \"8030103a-2e5a-4035-9c42-b64787681b23\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776085 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca\") pod \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776118 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsrdb\" (UniqueName: \"kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb\") pod \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\" (UID: \"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776154 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert\") pod \"8030103a-2e5a-4035-9c42-b64787681b23\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776170 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config\") pod \"8030103a-2e5a-4035-9c42-b64787681b23\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776231 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca\") pod \"8030103a-2e5a-4035-9c42-b64787681b23\" (UID: \"8030103a-2e5a-4035-9c42-b64787681b23\") " Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776498 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8x6m\" (UniqueName: \"kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " 
pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776562 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776580 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.776604 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.777444 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca" (OuterVolumeSpecName: "client-ca") pod "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" (UID: "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.777648 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config" (OuterVolumeSpecName: "config") pod "8030103a-2e5a-4035-9c42-b64787681b23" (UID: "8030103a-2e5a-4035-9c42-b64787681b23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.777900 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config" (OuterVolumeSpecName: "config") pod "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" (UID: "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.778001 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" (UID: "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.778769 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca" (OuterVolumeSpecName: "client-ca") pod "8030103a-2e5a-4035-9c42-b64787681b23" (UID: "8030103a-2e5a-4035-9c42-b64787681b23"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.783130 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb" (OuterVolumeSpecName: "kube-api-access-xsrdb") pod "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" (UID: "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62"). InnerVolumeSpecName "kube-api-access-xsrdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.783336 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64" (OuterVolumeSpecName: "kube-api-access-5gx64") pod "8030103a-2e5a-4035-9c42-b64787681b23" (UID: "8030103a-2e5a-4035-9c42-b64787681b23"). InnerVolumeSpecName "kube-api-access-5gx64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.783567 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" (UID: "e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.783586 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8030103a-2e5a-4035-9c42-b64787681b23" (UID: "8030103a-2e5a-4035-9c42-b64787681b23"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.877859 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8x6m\" (UniqueName: \"kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.877951 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.877979 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878031 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878084 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878100 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878268 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878865 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gx64\" (UniqueName: \"kubernetes.io/projected/8030103a-2e5a-4035-9c42-b64787681b23-kube-api-access-5gx64\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878930 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878960 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsrdb\" (UniqueName: \"kubernetes.io/projected/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62-kube-api-access-xsrdb\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878978 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8030103a-2e5a-4035-9c42-b64787681b23-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.878994 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc 
kubenswrapper[4814]: I0216 09:49:04.879010 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8030103a-2e5a-4035-9c42-b64787681b23-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.880112 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.880333 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.886187 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:04 crc kubenswrapper[4814]: I0216 09:49:04.903373 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8x6m\" (UniqueName: \"kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m\") pod \"route-controller-manager-744c6c946d-2lw6z\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 
09:49:05.009691 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.162549 4814 generic.go:334] "Generic (PLEG): container finished" podID="8030103a-2e5a-4035-9c42-b64787681b23" containerID="ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f" exitCode=0 Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.162656 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.162689 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" event={"ID":"8030103a-2e5a-4035-9c42-b64787681b23","Type":"ContainerDied","Data":"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f"} Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.163827 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t" event={"ID":"8030103a-2e5a-4035-9c42-b64787681b23","Type":"ContainerDied","Data":"374cce71a53ab6b3dde6522e9c1d24242683fc72d7edb1fac1e09fecc141ff9b"} Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.163854 4814 scope.go:117] "RemoveContainer" containerID="ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f" Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.166430 4814 generic.go:334] "Generic (PLEG): container finished" podID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerID="00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f" exitCode=0 Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.166487 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" 
event={"ID":"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62","Type":"ContainerDied","Data":"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"}
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.166523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" event={"ID":"e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62","Type":"ContainerDied","Data":"d6c3ab7749a4472c9fe00608539ca4a734367022de1cfad0f687fad14167acef"}
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.166555 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.184398 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"]
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.188939 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6867996bd5-m6z5t"]
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.206830 4814 scope.go:117] "RemoveContainer" containerID="ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f"
Feb 16 09:49:05 crc kubenswrapper[4814]: E0216 09:49:05.207515 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f\": container with ID starting with ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f not found: ID does not exist" containerID="ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.207611 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f"} err="failed to get container status \"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f\": rpc error: code = NotFound desc = could not find container \"ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f\": container with ID starting with ce84d0909c7c290c0e7748c10da7f4913f8c3ca31c19f548fc5001c50c455f4f not found: ID does not exist"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.207679 4814 scope.go:117] "RemoveContainer" containerID="00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.209284 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"]
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.211785 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-844596c887-hwf5q"]
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.235366 4814 scope.go:117] "RemoveContainer" containerID="00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"
Feb 16 09:49:05 crc kubenswrapper[4814]: E0216 09:49:05.235925 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f\": container with ID starting with 00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f not found: ID does not exist" containerID="00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.235960 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f"} err="failed to get container status \"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f\": rpc error: code = NotFound desc = could not find container \"00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f\": container with ID starting with 00f27f4d7c9285ebca74fb03b79fc66c144eefc7a162facf261954f8af500e5f not found: ID does not exist"
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.242361 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"]
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.583405 4814 patch_prober.go:28] interesting pod/controller-manager-844596c887-hwf5q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 09:49:05 crc kubenswrapper[4814]: I0216 09:49:05.583498 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-844596c887-hwf5q" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.106488 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.113790 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.116329 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.116885 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.122702 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.175037 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" event={"ID":"59718852-dd08-40dc-8b71-293e9b12f92d","Type":"ContainerStarted","Data":"e4ee6060d32b3e30369c779fdb294fa44306c9f6d5e1f5cfbc1bf55f44e9a0a1"}
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.175083 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" event={"ID":"59718852-dd08-40dc-8b71-293e9b12f92d","Type":"ContainerStarted","Data":"60c39133086e2eb83e8c517effe634a21cd03707d996a9e7165923c869e43e2a"}
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.175294 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.181261 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.194426 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" podStartSLOduration=4.194400794 podStartE2EDuration="4.194400794s" podCreationTimestamp="2026-02-16 09:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:06.19047316 +0000 UTC m=+203.883629340" watchObservedRunningTime="2026-02-16 09:49:06.194400794 +0000 UTC m=+203.887556974"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.299612 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.299712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.401207 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.401315 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.401323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.423172 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.436374 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.659652 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.956444 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"]
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.961683 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.964440 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.966942 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.967058 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.967815 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.968194 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.968816 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.978781 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 09:49:06 crc kubenswrapper[4814]: I0216 09:49:06.980337 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"]
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.001810 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8030103a-2e5a-4035-9c42-b64787681b23" path="/var/lib/kubelet/pods/8030103a-2e5a-4035-9c42-b64787681b23/volumes"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.002808 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62" path="/var/lib/kubelet/pods/e6cd24fe-5e38-48d6-a9b2-7c56d8d08e62/volumes"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.114824 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.114880 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9t8\" (UniqueName: \"kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.114977 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.115025 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.115064 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.184842 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"973b7307-2ce7-4e46-8569-e08148a94952","Type":"ContainerStarted","Data":"174e790e73085f31e42d4b3bdc7bd70948133d3f38d2f9a036236954c6746d59"}
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.184895 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"973b7307-2ce7-4e46-8569-e08148a94952","Type":"ContainerStarted","Data":"a84688bd2413273338639a722f7310ea6955473bc58f51699948276637e9c714"}
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.202660 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.202639047 podStartE2EDuration="1.202639047s" podCreationTimestamp="2026-02-16 09:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:07.20035762 +0000 UTC m=+204.893513800" watchObservedRunningTime="2026-02-16 09:49:07.202639047 +0000 UTC m=+204.895795227"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.216313 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.216380 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f9t8\" (UniqueName: \"kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.216419 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.216447 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.216486 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.217701 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.217764 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.218427 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.226008 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.237351 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f9t8\" (UniqueName: \"kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8\") pod \"controller-manager-69497794d-p6htt\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.297308 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.551941 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"]
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.960326 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.960839 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.960929 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.961875 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 09:49:07 crc kubenswrapper[4814]: I0216 09:49:07.961957 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a" gracePeriod=600
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.191260 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a" exitCode=0
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.191365 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a"}
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.193739 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" event={"ID":"0dc61033-bee9-4f4e-a353-5c0789bd016b","Type":"ContainerStarted","Data":"243b202871cd8b004674a0757f1eccd1d72df3bf6f30b40e72a02e0feaed62c8"}
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.193773 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" event={"ID":"0dc61033-bee9-4f4e-a353-5c0789bd016b","Type":"ContainerStarted","Data":"bea257fcd0d22a287425d25529a0791cca6ddc689fdf924e90b688e2b90e820a"}
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.194156 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.198202 4814 generic.go:334] "Generic (PLEG): container finished" podID="973b7307-2ce7-4e46-8569-e08148a94952" containerID="174e790e73085f31e42d4b3bdc7bd70948133d3f38d2f9a036236954c6746d59" exitCode=0
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.198299 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"973b7307-2ce7-4e46-8569-e08148a94952","Type":"ContainerDied","Data":"174e790e73085f31e42d4b3bdc7bd70948133d3f38d2f9a036236954c6746d59"}
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.199081 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:08 crc kubenswrapper[4814]: I0216 09:49:08.212794 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" podStartSLOduration=6.212768225 podStartE2EDuration="6.212768225s" podCreationTimestamp="2026-02-16 09:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:08.210594851 +0000 UTC m=+205.903751041" watchObservedRunningTime="2026-02-16 09:49:08.212768225 +0000 UTC m=+205.905924405"
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.206239 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506"}
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.443298 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.457002 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir\") pod \"973b7307-2ce7-4e46-8569-e08148a94952\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") "
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.457221 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access\") pod \"973b7307-2ce7-4e46-8569-e08148a94952\" (UID: \"973b7307-2ce7-4e46-8569-e08148a94952\") "
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.457691 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "973b7307-2ce7-4e46-8569-e08148a94952" (UID: "973b7307-2ce7-4e46-8569-e08148a94952"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.458578 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973b7307-2ce7-4e46-8569-e08148a94952-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.468942 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "973b7307-2ce7-4e46-8569-e08148a94952" (UID: "973b7307-2ce7-4e46-8569-e08148a94952"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:49:09 crc kubenswrapper[4814]: I0216 09:49:09.559098 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/973b7307-2ce7-4e46-8569-e08148a94952-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:10 crc kubenswrapper[4814]: I0216 09:49:10.214845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerStarted","Data":"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353"}
Feb 16 09:49:10 crc kubenswrapper[4814]: I0216 09:49:10.216364 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 16 09:49:10 crc kubenswrapper[4814]: I0216 09:49:10.216410 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"973b7307-2ce7-4e46-8569-e08148a94952","Type":"ContainerDied","Data":"a84688bd2413273338639a722f7310ea6955473bc58f51699948276637e9c714"}
Feb 16 09:49:10 crc kubenswrapper[4814]: I0216 09:49:10.216454 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84688bd2413273338639a722f7310ea6955473bc58f51699948276637e9c714"
Feb 16 09:49:11 crc kubenswrapper[4814]: I0216 09:49:11.223308 4814 generic.go:334] "Generic (PLEG): container finished" podID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerID="47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353" exitCode=0
Feb 16 09:49:11 crc kubenswrapper[4814]: I0216 09:49:11.223564 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerDied","Data":"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353"}
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.245637 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerStarted","Data":"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a"}
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.273862 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dcts9" podStartSLOduration=2.028363725 podStartE2EDuration="50.273837192s" podCreationTimestamp="2026-02-16 09:48:22 +0000 UTC" firstStartedPulling="2026-02-16 09:48:23.391880976 +0000 UTC m=+161.085037156" lastFinishedPulling="2026-02-16 09:49:11.637354443 +0000 UTC m=+209.330510623" observedRunningTime="2026-02-16 09:49:12.270206817 +0000 UTC m=+209.963363007" watchObservedRunningTime="2026-02-16 09:49:12.273837192 +0000 UTC m=+209.966993382"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.305051 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 09:49:12 crc kubenswrapper[4814]: E0216 09:49:12.305718 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973b7307-2ce7-4e46-8569-e08148a94952" containerName="pruner"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.306023 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="973b7307-2ce7-4e46-8569-e08148a94952" containerName="pruner"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.306263 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="973b7307-2ce7-4e46-8569-e08148a94952" containerName="pruner"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.306975 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.311606 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.312018 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.319800 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.405923 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.406002 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.406027 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.508264 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.508795 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.508584 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.508940 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.508964 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.532450 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access\") pod \"installer-9-crc\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.632856 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.832181 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dcts9"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.832635 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dcts9"
Feb 16 09:49:12 crc kubenswrapper[4814]: I0216 09:49:12.883917 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 09:49:12 crc kubenswrapper[4814]: W0216 09:49:12.893382 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf410ac3b_3f81_4ca4_8c09_70f312086d54.slice/crio-0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32 WatchSource:0}: Error finding container 0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32: Status 404 returned error can't find the container with id 0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32
Feb 16 09:49:13 crc kubenswrapper[4814]: I0216 09:49:13.255759 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerStarted","Data":"88f4bb54b9fbaa8d432e344b6c0329ee08525bb8204d842d6cb996a073303adc"}
Feb 16 09:49:13 crc kubenswrapper[4814]: I0216 09:49:13.257660 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f410ac3b-3f81-4ca4-8c09-70f312086d54","Type":"ContainerStarted","Data":"0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32"}
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.027245 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"]
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.117885 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dcts9" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="registry-server" probeResult="failure" output=<
Feb 16 09:49:14 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 09:49:14 crc kubenswrapper[4814]: >
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.266279 4814 generic.go:334] "Generic (PLEG): container finished" podID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerID="88f4bb54b9fbaa8d432e344b6c0329ee08525bb8204d842d6cb996a073303adc" exitCode=0
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.266379 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerDied","Data":"88f4bb54b9fbaa8d432e344b6c0329ee08525bb8204d842d6cb996a073303adc"}
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.268172 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f410ac3b-3f81-4ca4-8c09-70f312086d54","Type":"ContainerStarted","Data":"e118e0999623ce694677e816b1ab8532f148f24c0b43bb46c44dff8b3d97852f"}
Feb 16 09:49:14 crc kubenswrapper[4814]: I0216 09:49:14.315273 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.315246583 podStartE2EDuration="2.315246583s" podCreationTimestamp="2026-02-16 09:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:14.313981646 +0000 UTC m=+212.007137826" watchObservedRunningTime="2026-02-16 09:49:14.315246583 +0000 UTC m=+212.008402763"
Feb 16 09:49:15 crc kubenswrapper[4814]: I0216 09:49:15.286571 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerStarted","Data":"d0ce7e5857db456db6cc2b7c3052de42dcfac355a579ec68d7f876a4141616c6"} Feb 16 09:49:15 crc kubenswrapper[4814]: I0216 09:49:15.295432 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerStarted","Data":"382e33f4260ddb31c9c26640c6440ad298f96b2ae86a96314f217855b9454dd0"} Feb 16 09:49:15 crc kubenswrapper[4814]: I0216 09:49:15.303482 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerStarted","Data":"74a927d0b14bec23f5ba76466aa4d9df75242d045b078c92e65a77345a2f0064"} Feb 16 09:49:15 crc kubenswrapper[4814]: I0216 09:49:15.317840 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lsc9" podStartSLOduration=3.001516475 podStartE2EDuration="53.317811842s" podCreationTimestamp="2026-02-16 09:48:22 +0000 UTC" firstStartedPulling="2026-02-16 09:48:24.459991458 +0000 UTC m=+162.153147638" lastFinishedPulling="2026-02-16 09:49:14.776286825 +0000 UTC m=+212.469443005" observedRunningTime="2026-02-16 09:49:15.309654984 +0000 UTC m=+213.002811164" watchObservedRunningTime="2026-02-16 09:49:15.317811842 +0000 UTC m=+213.010968032" Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.310732 4814 generic.go:334] "Generic (PLEG): container finished" podID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerID="382e33f4260ddb31c9c26640c6440ad298f96b2ae86a96314f217855b9454dd0" exitCode=0 Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.310807 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerDied","Data":"382e33f4260ddb31c9c26640c6440ad298f96b2ae86a96314f217855b9454dd0"} Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.314345 4814 generic.go:334] "Generic (PLEG): container finished" podID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerID="74a927d0b14bec23f5ba76466aa4d9df75242d045b078c92e65a77345a2f0064" exitCode=0 Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.314387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerDied","Data":"74a927d0b14bec23f5ba76466aa4d9df75242d045b078c92e65a77345a2f0064"} Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.314417 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerStarted","Data":"29a3c4a5ab26d39b2fc535790baf241d754c3ac36e23a1d3a95c5934304a4f6d"} Feb 16 09:49:16 crc kubenswrapper[4814]: I0216 09:49:16.364318 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gg2x" podStartSLOduration=1.9906939609999998 podStartE2EDuration="53.364292247s" podCreationTimestamp="2026-02-16 09:48:23 +0000 UTC" firstStartedPulling="2026-02-16 09:48:24.434686585 +0000 UTC m=+162.127842765" lastFinishedPulling="2026-02-16 09:49:15.808284881 +0000 UTC m=+213.501441051" observedRunningTime="2026-02-16 09:49:16.360223649 +0000 UTC m=+214.053379829" watchObservedRunningTime="2026-02-16 09:49:16.364292247 +0000 UTC m=+214.057448427" Feb 16 09:49:17 crc kubenswrapper[4814]: I0216 09:49:17.328277 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" 
event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerStarted","Data":"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f"} Feb 16 09:49:17 crc kubenswrapper[4814]: I0216 09:49:17.331282 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerStarted","Data":"acb6c8969b84de9afd1228ed85cc5a06a80013bd0b159352fd749b9fc82106b5"} Feb 16 09:49:17 crc kubenswrapper[4814]: I0216 09:49:17.332868 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerStarted","Data":"30805689cff971906f23f442187acc37ead20fa3028cda5c66f0dbc1391f4c94"} Feb 16 09:49:17 crc kubenswrapper[4814]: I0216 09:49:17.403224 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kz6kn" podStartSLOduration=4.43653487 podStartE2EDuration="52.403194012s" podCreationTimestamp="2026-02-16 09:48:25 +0000 UTC" firstStartedPulling="2026-02-16 09:48:28.730812365 +0000 UTC m=+166.423968545" lastFinishedPulling="2026-02-16 09:49:16.697471507 +0000 UTC m=+214.390627687" observedRunningTime="2026-02-16 09:49:17.401437742 +0000 UTC m=+215.094593922" watchObservedRunningTime="2026-02-16 09:49:17.403194012 +0000 UTC m=+215.096350192" Feb 16 09:49:18 crc kubenswrapper[4814]: I0216 09:49:18.340081 4814 generic.go:334] "Generic (PLEG): container finished" podID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerID="30805689cff971906f23f442187acc37ead20fa3028cda5c66f0dbc1391f4c94" exitCode=0 Feb 16 09:49:18 crc kubenswrapper[4814]: I0216 09:49:18.341188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" 
event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerDied","Data":"30805689cff971906f23f442187acc37ead20fa3028cda5c66f0dbc1391f4c94"} Feb 16 09:49:18 crc kubenswrapper[4814]: I0216 09:49:18.352141 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerStarted","Data":"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286"} Feb 16 09:49:18 crc kubenswrapper[4814]: I0216 09:49:18.362184 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerStarted","Data":"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0"} Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 09:49:19.372989 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerDied","Data":"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0"} Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 09:49:19.372912 4814 generic.go:334] "Generic (PLEG): container finished" podID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerID="a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0" exitCode=0 Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 09:49:19.376892 4814 generic.go:334] "Generic (PLEG): container finished" podID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerID="023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f" exitCode=0 Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 09:49:19.376970 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerDied","Data":"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f"} Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 
09:49:19.386484 4814 generic.go:334] "Generic (PLEG): container finished" podID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerID="cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286" exitCode=0 Feb 16 09:49:19 crc kubenswrapper[4814]: I0216 09:49:19.386578 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerDied","Data":"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286"} Feb 16 09:49:21 crc kubenswrapper[4814]: I0216 09:49:21.403595 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerStarted","Data":"38327da15d5e694ed57a377d1771379a33d904acd6627e8acd4bb22ba2c41bd3"} Feb 16 09:49:22 crc kubenswrapper[4814]: I0216 09:49:22.431564 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ngqwc" podStartSLOduration=4.495955755 podStartE2EDuration="1m0.43149789s" podCreationTimestamp="2026-02-16 09:48:22 +0000 UTC" firstStartedPulling="2026-02-16 09:48:24.438487667 +0000 UTC m=+162.131643847" lastFinishedPulling="2026-02-16 09:49:20.374029802 +0000 UTC m=+218.067185982" observedRunningTime="2026-02-16 09:49:22.430565933 +0000 UTC m=+220.123722113" watchObservedRunningTime="2026-02-16 09:49:22.43149789 +0000 UTC m=+220.124654100" Feb 16 09:49:22 crc kubenswrapper[4814]: I0216 09:49:22.591851 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"] Feb 16 09:49:22 crc kubenswrapper[4814]: I0216 09:49:22.592234 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" podUID="59718852-dd08-40dc-8b71-293e9b12f92d" containerName="route-controller-manager" 
containerID="cri-o://e4ee6060d32b3e30369c779fdb294fa44306c9f6d5e1f5cfbc1bf55f44e9a0a1" gracePeriod=30 Feb 16 09:49:22 crc kubenswrapper[4814]: I0216 09:49:22.673078 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"] Feb 16 09:49:22 crc kubenswrapper[4814]: I0216 09:49:22.673436 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" podUID="0dc61033-bee9-4f4e-a353-5c0789bd016b" containerName="controller-manager" containerID="cri-o://243b202871cd8b004674a0757f1eccd1d72df3bf6f30b40e72a02e0feaed62c8" gracePeriod=30 Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.023614 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.023663 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.076201 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.136243 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.231226 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.231270 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.276109 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.420200 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerStarted","Data":"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb"} Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.422433 4814 generic.go:334] "Generic (PLEG): container finished" podID="0dc61033-bee9-4f4e-a353-5c0789bd016b" containerID="243b202871cd8b004674a0757f1eccd1d72df3bf6f30b40e72a02e0feaed62c8" exitCode=0 Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.422508 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" event={"ID":"0dc61033-bee9-4f4e-a353-5c0789bd016b","Type":"ContainerDied","Data":"243b202871cd8b004674a0757f1eccd1d72df3bf6f30b40e72a02e0feaed62c8"} Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.424225 4814 generic.go:334] "Generic (PLEG): container finished" podID="59718852-dd08-40dc-8b71-293e9b12f92d" containerID="e4ee6060d32b3e30369c779fdb294fa44306c9f6d5e1f5cfbc1bf55f44e9a0a1" exitCode=0 Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.424290 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" event={"ID":"59718852-dd08-40dc-8b71-293e9b12f92d","Type":"ContainerDied","Data":"e4ee6060d32b3e30369c779fdb294fa44306c9f6d5e1f5cfbc1bf55f44e9a0a1"} Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.427630 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerStarted","Data":"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7"} Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.450527 4814 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jcv4r" podStartSLOduration=4.097375271 podStartE2EDuration="59.450503126s" podCreationTimestamp="2026-02-16 09:48:24 +0000 UTC" firstStartedPulling="2026-02-16 09:48:26.598045611 +0000 UTC m=+164.291201791" lastFinishedPulling="2026-02-16 09:49:21.951173456 +0000 UTC m=+219.644329646" observedRunningTime="2026-02-16 09:49:23.44686332 +0000 UTC m=+221.140019500" watchObservedRunningTime="2026-02-16 09:49:23.450503126 +0000 UTC m=+221.143659306" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.450831 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.450870 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.483877 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lsc9" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.507999 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gg2x" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.920024 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.961848 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"] Feb 16 09:49:23 crc kubenswrapper[4814]: E0216 09:49:23.962468 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc61033-bee9-4f4e-a353-5c0789bd016b" containerName="controller-manager" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.962483 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc61033-bee9-4f4e-a353-5c0789bd016b" containerName="controller-manager" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.962746 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc61033-bee9-4f4e-a353-5c0789bd016b" containerName="controller-manager" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.963244 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.982322 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.996518 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca\") pod \"0dc61033-bee9-4f4e-a353-5c0789bd016b\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.996652 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f9t8\" (UniqueName: \"kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8\") pod \"0dc61033-bee9-4f4e-a353-5c0789bd016b\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.996719 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles\") pod \"0dc61033-bee9-4f4e-a353-5c0789bd016b\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.996781 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config\") pod \"0dc61033-bee9-4f4e-a353-5c0789bd016b\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.996882 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert\") pod \"0dc61033-bee9-4f4e-a353-5c0789bd016b\" (UID: \"0dc61033-bee9-4f4e-a353-5c0789bd016b\") " Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.998295 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca" (OuterVolumeSpecName: "client-ca") pod "0dc61033-bee9-4f4e-a353-5c0789bd016b" (UID: "0dc61033-bee9-4f4e-a353-5c0789bd016b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.998729 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0dc61033-bee9-4f4e-a353-5c0789bd016b" (UID: "0dc61033-bee9-4f4e-a353-5c0789bd016b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:23 crc kubenswrapper[4814]: I0216 09:49:23.999163 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config" (OuterVolumeSpecName: "config") pod "0dc61033-bee9-4f4e-a353-5c0789bd016b" (UID: "0dc61033-bee9-4f4e-a353-5c0789bd016b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.004947 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0dc61033-bee9-4f4e-a353-5c0789bd016b" (UID: "0dc61033-bee9-4f4e-a353-5c0789bd016b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.006526 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8" (OuterVolumeSpecName: "kube-api-access-2f9t8") pod "0dc61033-bee9-4f4e-a353-5c0789bd016b" (UID: "0dc61033-bee9-4f4e-a353-5c0789bd016b"). InnerVolumeSpecName "kube-api-access-2f9t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.027263 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"] Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.067511 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ngqwc" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="registry-server" probeResult="failure" output=< Feb 16 09:49:24 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 09:49:24 crc kubenswrapper[4814]: > Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098348 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca\") pod \"59718852-dd08-40dc-8b71-293e9b12f92d\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098461 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8x6m\" (UniqueName: \"kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m\") pod \"59718852-dd08-40dc-8b71-293e9b12f92d\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098567 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config\") pod \"59718852-dd08-40dc-8b71-293e9b12f92d\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098598 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert\") pod 
\"59718852-dd08-40dc-8b71-293e9b12f92d\" (UID: \"59718852-dd08-40dc-8b71-293e9b12f92d\") " Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098831 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098855 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ffnf\" (UniqueName: \"kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098963 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.098991 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099049 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099101 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f9t8\" (UniqueName: \"kubernetes.io/projected/0dc61033-bee9-4f4e-a353-5c0789bd016b-kube-api-access-2f9t8\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099113 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099126 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099139 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0dc61033-bee9-4f4e-a353-5c0789bd016b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.099148 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0dc61033-bee9-4f4e-a353-5c0789bd016b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.100579 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config" (OuterVolumeSpecName: "config") pod "59718852-dd08-40dc-8b71-293e9b12f92d" (UID: "59718852-dd08-40dc-8b71-293e9b12f92d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.100759 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca" (OuterVolumeSpecName: "client-ca") pod "59718852-dd08-40dc-8b71-293e9b12f92d" (UID: "59718852-dd08-40dc-8b71-293e9b12f92d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.106771 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59718852-dd08-40dc-8b71-293e9b12f92d" (UID: "59718852-dd08-40dc-8b71-293e9b12f92d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.106845 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m" (OuterVolumeSpecName: "kube-api-access-z8x6m") pod "59718852-dd08-40dc-8b71-293e9b12f92d" (UID: "59718852-dd08-40dc-8b71-293e9b12f92d"). InnerVolumeSpecName "kube-api-access-z8x6m". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.200426 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.200618 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.201618 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.201666 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.201777 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ffnf\" (UniqueName: \"kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202027 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202274 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-config\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202295 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59718852-dd08-40dc-8b71-293e9b12f92d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202312 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59718852-dd08-40dc-8b71-293e9b12f92d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.202323 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8x6m\" (UniqueName: \"kubernetes.io/projected/59718852-dd08-40dc-8b71-293e9b12f92d-kube-api-access-z8x6m\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.204059 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.206216 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.221102 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ffnf\" (UniqueName: \"kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf\") pod \"controller-manager-795ff79796-hcd8h\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.293651 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.436229 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerStarted","Data":"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9"}
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.438120 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.438120 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z" event={"ID":"59718852-dd08-40dc-8b71-293e9b12f92d","Type":"ContainerDied","Data":"60c39133086e2eb83e8c517effe634a21cd03707d996a9e7165923c869e43e2a"}
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.438267 4814 scope.go:117] "RemoveContainer" containerID="e4ee6060d32b3e30369c779fdb294fa44306c9f6d5e1f5cfbc1bf55f44e9a0a1"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.454160 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69497794d-p6htt"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.465438 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69497794d-p6htt" event={"ID":"0dc61033-bee9-4f4e-a353-5c0789bd016b","Type":"ContainerDied","Data":"bea257fcd0d22a287425d25529a0791cca6ddc689fdf924e90b688e2b90e820a"}
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.466907 4814 scope.go:117] "RemoveContainer" containerID="243b202871cd8b004674a0757f1eccd1d72df3bf6f30b40e72a02e0feaed62c8"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.484640 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-khzgg" podStartSLOduration=3.562460127 podStartE2EDuration="58.484619991s" podCreationTimestamp="2026-02-16 09:48:26 +0000 UTC" firstStartedPulling="2026-02-16 09:48:28.670659285 +0000 UTC m=+166.363815465" lastFinishedPulling="2026-02-16 09:49:23.592819149 +0000 UTC m=+221.285975329" observedRunningTime="2026-02-16 09:49:24.476183486 +0000 UTC m=+222.169339676" watchObservedRunningTime="2026-02-16 09:49:24.484619991 +0000 UTC m=+222.177776171"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.528953 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-88g2q" podStartSLOduration=4.053412827 podStartE2EDuration="1m0.528912571s" podCreationTimestamp="2026-02-16 09:48:24 +0000 UTC" firstStartedPulling="2026-02-16 09:48:26.619410329 +0000 UTC m=+164.312566509" lastFinishedPulling="2026-02-16 09:49:23.094910073 +0000 UTC m=+220.788066253" observedRunningTime="2026-02-16 09:49:24.504399237 +0000 UTC m=+222.197555417" watchObservedRunningTime="2026-02-16 09:49:24.528912571 +0000 UTC m=+222.222068751"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.557058 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"]
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.567275 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-744c6c946d-2lw6z"]
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.573401 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gg2x"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.593328 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"]
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.606379 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69497794d-p6htt"]
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.637300 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"]
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.828509 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jcv4r"
Feb 16 09:49:24 crc kubenswrapper[4814]: I0216 09:49:24.828615 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jcv4r"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.000858 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc61033-bee9-4f4e-a353-5c0789bd016b" path="/var/lib/kubelet/pods/0dc61033-bee9-4f4e-a353-5c0789bd016b/volumes"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.001825 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59718852-dd08-40dc-8b71-293e9b12f92d" path="/var/lib/kubelet/pods/59718852-dd08-40dc-8b71-293e9b12f92d/volumes"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.251668 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-88g2q"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.252140 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-88g2q"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.460836 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" event={"ID":"9487fcd1-54b9-46fa-8204-157a532b9df0","Type":"ContainerStarted","Data":"29a2b8af0249d124f0bc7f4a89409dd4176acba6f0c202c02069bf01874315aa"}
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.460892 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" event={"ID":"9487fcd1-54b9-46fa-8204-157a532b9df0","Type":"ContainerStarted","Data":"af83c7456081e7eea2844385d0430871b306b87b3da468b00819296b634b88f5"}
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.461770 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.467948 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.497687 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" podStartSLOduration=3.497669924 podStartE2EDuration="3.497669924s" podCreationTimestamp="2026-02-16 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:25.489475796 +0000 UTC m=+223.182631976" watchObservedRunningTime="2026-02-16 09:49:25.497669924 +0000 UTC m=+223.190826104"
Feb 16 09:49:25 crc kubenswrapper[4814]: I0216 09:49:25.886788 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jcv4r" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="registry-server" probeResult="failure" output=<
Feb 16 09:49:25 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 09:49:25 crc kubenswrapper[4814]: >
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.261908 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"]
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.262213 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8lsc9" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="registry-server" containerID="cri-o://d0ce7e5857db456db6cc2b7c3052de42dcfac355a579ec68d7f876a4141616c6" gracePeriod=2
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.303075 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-88g2q" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="registry-server" probeResult="failure" output=<
Feb 16 09:49:26 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 09:49:26 crc kubenswrapper[4814]: >
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.343076 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kz6kn"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.344820 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kz6kn"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.409299 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kz6kn"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.532276 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kz6kn"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.650196 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-khzgg"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.651704 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-khzgg"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.932171 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"]
Feb 16 09:49:26 crc kubenswrapper[4814]: E0216 09:49:26.934466 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59718852-dd08-40dc-8b71-293e9b12f92d" containerName="route-controller-manager"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.934610 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="59718852-dd08-40dc-8b71-293e9b12f92d" containerName="route-controller-manager"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.934936 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="59718852-dd08-40dc-8b71-293e9b12f92d" containerName="route-controller-manager"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.935994 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.939521 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.940072 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.940738 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.941200 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.941312 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.942315 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 09:49:26 crc kubenswrapper[4814]: I0216 09:49:26.944488 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"]
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.066834 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.066922 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.067018 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjr8l\" (UniqueName: \"kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.067074 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.168911 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.169039 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.169101 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjr8l\" (UniqueName: \"kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.169162 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.170360 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.171223 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.177300 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.186856 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjr8l\" (UniqueName: \"kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l\") pod \"route-controller-manager-6bbc7bc859-6z2pm\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.265867 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.484308 4814 generic.go:334] "Generic (PLEG): container finished" podID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerID="d0ce7e5857db456db6cc2b7c3052de42dcfac355a579ec68d7f876a4141616c6" exitCode=0
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.484404 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerDied","Data":"d0ce7e5857db456db6cc2b7c3052de42dcfac355a579ec68d7f876a4141616c6"}
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.582195 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lsc9"
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.675200 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9456p\" (UniqueName: \"kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p\") pod \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") "
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.675287 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content\") pod \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") "
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.675372 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities\") pod \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\" (UID: \"1093e2eb-672e-4aae-8ee6-ffc390592ff8\") "
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.676992 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities" (OuterVolumeSpecName: "utilities") pod "1093e2eb-672e-4aae-8ee6-ffc390592ff8" (UID: "1093e2eb-672e-4aae-8ee6-ffc390592ff8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.684073 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p" (OuterVolumeSpecName: "kube-api-access-9456p") pod "1093e2eb-672e-4aae-8ee6-ffc390592ff8" (UID: "1093e2eb-672e-4aae-8ee6-ffc390592ff8"). InnerVolumeSpecName "kube-api-access-9456p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.696933 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-khzgg" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="registry-server" probeResult="failure" output=<
Feb 16 09:49:27 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 09:49:27 crc kubenswrapper[4814]: >
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.729774 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1093e2eb-672e-4aae-8ee6-ffc390592ff8" (UID: "1093e2eb-672e-4aae-8ee6-ffc390592ff8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.781745 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9456p\" (UniqueName: \"kubernetes.io/projected/1093e2eb-672e-4aae-8ee6-ffc390592ff8-kube-api-access-9456p\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.781814 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.781831 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1093e2eb-672e-4aae-8ee6-ffc390592ff8-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:27 crc kubenswrapper[4814]: I0216 09:49:27.791272 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"]
Feb 16 09:49:27 crc kubenswrapper[4814]: W0216 09:49:27.806726 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b3b5c4_e32a_4ec3_97fe_69d83a0ce5b4.slice/crio-b1e8057cd7f1901d3729d91769948b6411a192662fd3ffd7984851d35da78634 WatchSource:0}: Error finding container b1e8057cd7f1901d3729d91769948b6411a192662fd3ffd7984851d35da78634: Status 404 returned error can't find the container with id b1e8057cd7f1901d3729d91769948b6411a192662fd3ffd7984851d35da78634
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.494838 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" event={"ID":"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4","Type":"ContainerStarted","Data":"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d"}
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.494899 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" event={"ID":"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4","Type":"ContainerStarted","Data":"b1e8057cd7f1901d3729d91769948b6411a192662fd3ffd7984851d35da78634"}
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.497057 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.500377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lsc9" event={"ID":"1093e2eb-672e-4aae-8ee6-ffc390592ff8","Type":"ContainerDied","Data":"4899a9e184340c828e5ec61f4e228446e7915f07e0551cb3d36b704aba7c624d"}
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.500423 4814 scope.go:117] "RemoveContainer" containerID="d0ce7e5857db456db6cc2b7c3052de42dcfac355a579ec68d7f876a4141616c6"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.500517 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lsc9"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.503494 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.520936 4814 scope.go:117] "RemoveContainer" containerID="88f4bb54b9fbaa8d432e344b6c0329ee08525bb8204d842d6cb996a073303adc"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.533153 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" podStartSLOduration=6.533127554 podStartE2EDuration="6.533127554s" podCreationTimestamp="2026-02-16 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:28.525591605 +0000 UTC m=+226.218747785" watchObservedRunningTime="2026-02-16 09:49:28.533127554 +0000 UTC m=+226.226283734"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.574112 4814 scope.go:117] "RemoveContainer" containerID="6947c94419bc7c55ae7e91bbaf59735abf33cd49b2e401aca0c18d8bef233999"
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.581944 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"]
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.583893 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8lsc9"]
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.677000 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gg2x"]
Feb 16 09:49:28 crc kubenswrapper[4814]: I0216 09:49:28.677604 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gg2x" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="registry-server" containerID="cri-o://29a3c4a5ab26d39b2fc535790baf241d754c3ac36e23a1d3a95c5934304a4f6d" gracePeriod=2
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.001943 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" path="/var/lib/kubelet/pods/1093e2eb-672e-4aae-8ee6-ffc390592ff8/volumes"
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.508365 4814 generic.go:334] "Generic (PLEG): container finished" podID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerID="29a3c4a5ab26d39b2fc535790baf241d754c3ac36e23a1d3a95c5934304a4f6d" exitCode=0
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.508433 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerDied","Data":"29a3c4a5ab26d39b2fc535790baf241d754c3ac36e23a1d3a95c5934304a4f6d"}
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.697700 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gg2x"
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.814653 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content\") pod \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") "
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.814861 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities\") pod \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") "
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.814959 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rffxw\" (UniqueName: \"kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw\") pod \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\" (UID: \"231208dc-d685-4a03-935e-ac1f6c6f7bf4\") "
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.817693 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities" (OuterVolumeSpecName: "utilities") pod "231208dc-d685-4a03-935e-ac1f6c6f7bf4" (UID: "231208dc-d685-4a03-935e-ac1f6c6f7bf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.822762 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw" (OuterVolumeSpecName: "kube-api-access-rffxw") pod "231208dc-d685-4a03-935e-ac1f6c6f7bf4" (UID: "231208dc-d685-4a03-935e-ac1f6c6f7bf4"). InnerVolumeSpecName "kube-api-access-rffxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.874094 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "231208dc-d685-4a03-935e-ac1f6c6f7bf4" (UID: "231208dc-d685-4a03-935e-ac1f6c6f7bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.916985 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.917034 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rffxw\" (UniqueName: \"kubernetes.io/projected/231208dc-d685-4a03-935e-ac1f6c6f7bf4-kube-api-access-rffxw\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:29 crc kubenswrapper[4814]: I0216 09:49:29.917047 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231208dc-d685-4a03-935e-ac1f6c6f7bf4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.522031 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gg2x"
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.522028 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gg2x" event={"ID":"231208dc-d685-4a03-935e-ac1f6c6f7bf4","Type":"ContainerDied","Data":"0d131f97ea1fea35c074edd7ad43023bf850a0a649b06772102e0f80f4ac1743"}
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.522807 4814 scope.go:117] "RemoveContainer" containerID="29a3c4a5ab26d39b2fc535790baf241d754c3ac36e23a1d3a95c5934304a4f6d"
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.547983 4814 scope.go:117] "RemoveContainer" containerID="74a927d0b14bec23f5ba76466aa4d9df75242d045b078c92e65a77345a2f0064"
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.563013 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gg2x"]
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.568670 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gg2x"]
Feb 16 09:49:30 crc kubenswrapper[4814]: I0216 09:49:30.576584 4814 scope.go:117] "RemoveContainer" containerID="97ec0444972677904d6417fa31da642690b715301ec369cd861fb7f67a0052ff"
Feb 16 09:49:31 crc kubenswrapper[4814]: I0216 09:49:31.003126 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" path="/var/lib/kubelet/pods/231208dc-d685-4a03-935e-ac1f6c6f7bf4/volumes"
Feb 16 09:49:33 crc kubenswrapper[4814]: I0216 09:49:33.073995 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ngqwc"
Feb 16 09:49:33 crc kubenswrapper[4814]: I0216 09:49:33.132896 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ngqwc"
Feb 16 09:49:34 crc kubenswrapper[4814]: I0216 09:49:34.883440 4814 kubelet.go:2542]
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:49:34 crc kubenswrapper[4814]: I0216 09:49:34.924635 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:49:35 crc kubenswrapper[4814]: I0216 09:49:35.305446 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:49:35 crc kubenswrapper[4814]: I0216 09:49:35.353369 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:49:36 crc kubenswrapper[4814]: I0216 09:49:36.708652 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:49:36 crc kubenswrapper[4814]: I0216 09:49:36.757285 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:49:36 crc kubenswrapper[4814]: I0216 09:49:36.860905 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:49:36 crc kubenswrapper[4814]: I0216 09:49:36.861180 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-88g2q" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="registry-server" containerID="cri-o://f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7" gracePeriod=2 Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.286005 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.328886 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2vvq\" (UniqueName: \"kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq\") pod \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.328971 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities\") pod \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.329006 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content\") pod \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\" (UID: \"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c\") " Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.331051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities" (OuterVolumeSpecName: "utilities") pod "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" (UID: "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.336507 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq" (OuterVolumeSpecName: "kube-api-access-p2vvq") pod "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" (UID: "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c"). InnerVolumeSpecName "kube-api-access-p2vvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.360462 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" (UID: "b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.430631 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2vvq\" (UniqueName: \"kubernetes.io/projected/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-kube-api-access-p2vvq\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.430681 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.430696 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.575295 4814 generic.go:334] "Generic (PLEG): container finished" podID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerID="f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7" exitCode=0 Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.575347 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerDied","Data":"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7"} Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.575390 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-88g2q" event={"ID":"b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c","Type":"ContainerDied","Data":"58a69ebec02a24731a27ada27ff23529a57e15269d6890490b885b5b66958a6e"} Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.575413 4814 scope.go:117] "RemoveContainer" containerID="f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.575479 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-88g2q" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.613201 4814 scope.go:117] "RemoveContainer" containerID="cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.617183 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.627483 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-88g2q"] Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.647698 4814 scope.go:117] "RemoveContainer" containerID="ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.671915 4814 scope.go:117] "RemoveContainer" containerID="f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7" Feb 16 09:49:37 crc kubenswrapper[4814]: E0216 09:49:37.673625 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7\": container with ID starting with f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7 not found: ID does not exist" containerID="f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.673666 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7"} err="failed to get container status \"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7\": rpc error: code = NotFound desc = could not find container \"f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7\": container with ID starting with f1b665996c141e640820f7e7a259f7cf995a50952b37e51ac5f6a33c559510a7 not found: ID does not exist" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.673701 4814 scope.go:117] "RemoveContainer" containerID="cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286" Feb 16 09:49:37 crc kubenswrapper[4814]: E0216 09:49:37.674211 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286\": container with ID starting with cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286 not found: ID does not exist" containerID="cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.674246 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286"} err="failed to get container status \"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286\": rpc error: code = NotFound desc = could not find container \"cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286\": container with ID starting with cd3d6fc81437c7756fdbb229b21b409103b550d464e59c633c6fe9ce9f53c286 not found: ID does not exist" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.674266 4814 scope.go:117] "RemoveContainer" containerID="ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a" Feb 16 09:49:37 crc kubenswrapper[4814]: E0216 
09:49:37.674755 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a\": container with ID starting with ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a not found: ID does not exist" containerID="ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a" Feb 16 09:49:37 crc kubenswrapper[4814]: I0216 09:49:37.674784 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a"} err="failed to get container status \"ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a\": rpc error: code = NotFound desc = could not find container \"ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a\": container with ID starting with ad1925c57a0b69de13136cde9952938b8521ad25b96bf61af15ca9ba8dc65f2a not found: ID does not exist" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.006097 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" path="/var/lib/kubelet/pods/b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c/volumes" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.065201 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.065704 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-khzgg" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="registry-server" containerID="cri-o://1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9" gracePeriod=2 Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.090894 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" 
podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerName="oauth-openshift" containerID="cri-o://5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e" gracePeriod=15 Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.536459 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.539449 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.612563 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerDied","Data":"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9"} Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.612628 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-khzgg" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.612514 4814 generic.go:334] "Generic (PLEG): container finished" podID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerID="1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9" exitCode=0 Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.612634 4814 scope.go:117] "RemoveContainer" containerID="1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.613303 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-khzgg" event={"ID":"37e44ee2-4f8c-44f7-9428-966356c68a90","Type":"ContainerDied","Data":"40de042b068eaf3d4669fe9404d99a5c3f5d597216e7e842dc2106af1164f680"} Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.619962 4814 generic.go:334] "Generic (PLEG): container finished" podID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerID="5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e" exitCode=0 Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.620020 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" event={"ID":"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a","Type":"ContainerDied","Data":"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e"} Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.620055 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" event={"ID":"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a","Type":"ContainerDied","Data":"9afb9b357836f1b42572f11eb7b19890a40405d9fe58cf763d765db6cea759a0"} Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.620128 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vv6v6" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.635266 4814 scope.go:117] "RemoveContainer" containerID="023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.668481 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.668957 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669003 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669038 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669075 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669132 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669174 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content\") pod \"37e44ee2-4f8c-44f7-9428-966356c68a90\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669207 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669291 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669317 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j2zb\" (UniqueName: \"kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb\") pod \"37e44ee2-4f8c-44f7-9428-966356c68a90\" (UID: 
\"37e44ee2-4f8c-44f7-9428-966356c68a90\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669348 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669387 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669414 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669468 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669507 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtnz\" (UniqueName: \"kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 
16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669559 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities\") pod \"37e44ee2-4f8c-44f7-9428-966356c68a90\" (UID: \"37e44ee2-4f8c-44f7-9428-966356c68a90\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.669588 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login\") pod \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\" (UID: \"f51c8b2c-1728-4385-a7a4-f55a2f7cc18a\") " Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.671124 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.671455 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.671793 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.672790 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.672881 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.675252 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities" (OuterVolumeSpecName: "utilities") pod "37e44ee2-4f8c-44f7-9428-966356c68a90" (UID: "37e44ee2-4f8c-44f7-9428-966356c68a90"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.676700 4814 scope.go:117] "RemoveContainer" containerID="ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.679841 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.680482 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.680725 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.680918 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz" (OuterVolumeSpecName: "kube-api-access-vgtnz") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "kube-api-access-vgtnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.680944 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.681563 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.681942 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.682875 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.684010 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" (UID: "f51c8b2c-1728-4385-a7a4-f55a2f7cc18a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.688390 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb" (OuterVolumeSpecName: "kube-api-access-4j2zb") pod "37e44ee2-4f8c-44f7-9428-966356c68a90" (UID: "37e44ee2-4f8c-44f7-9428-966356c68a90"). InnerVolumeSpecName "kube-api-access-4j2zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.700193 4814 scope.go:117] "RemoveContainer" containerID="1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9" Feb 16 09:49:39 crc kubenswrapper[4814]: E0216 09:49:39.700731 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9\": container with ID starting with 1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9 not found: ID does not exist" containerID="1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.700799 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9"} err="failed to get container status \"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9\": rpc error: code = NotFound desc = could not find container \"1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9\": container with ID starting with 1154fe206105bb68b6c7f0dcb0d602379988d39c7218378524be46fb6df78be9 not found: ID does not exist" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.700855 4814 scope.go:117] "RemoveContainer" containerID="023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f" Feb 16 09:49:39 crc kubenswrapper[4814]: E0216 09:49:39.701287 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f\": container with ID starting with 023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f not found: ID does not exist" containerID="023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.701335 
4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f"} err="failed to get container status \"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f\": rpc error: code = NotFound desc = could not find container \"023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f\": container with ID starting with 023a6031a2d6627845ef84ecedcdd63564d38d1073b4872bb676a9f3e654704f not found: ID does not exist" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.701371 4814 scope.go:117] "RemoveContainer" containerID="ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3" Feb 16 09:49:39 crc kubenswrapper[4814]: E0216 09:49:39.701794 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3\": container with ID starting with ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3 not found: ID does not exist" containerID="ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.701851 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3"} err="failed to get container status \"ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3\": rpc error: code = NotFound desc = could not find container \"ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3\": container with ID starting with ff6dcacf6904c74e7860b58bb36bade35fe1559f8406d84384270e53c3d1e4e3 not found: ID does not exist" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.701886 4814 scope.go:117] "RemoveContainer" containerID="5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 
09:49:39.718076 4814 scope.go:117] "RemoveContainer" containerID="5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e" Feb 16 09:49:39 crc kubenswrapper[4814]: E0216 09:49:39.718474 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e\": container with ID starting with 5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e not found: ID does not exist" containerID="5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.718518 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e"} err="failed to get container status \"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e\": rpc error: code = NotFound desc = could not find container \"5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e\": container with ID starting with 5c147b94ab9a0c48b3be170c8b9bdc5031a01662400056f2c508744d1c996c1e not found: ID does not exist" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771711 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771762 4814 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771776 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771791 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j2zb\" (UniqueName: \"kubernetes.io/projected/37e44ee2-4f8c-44f7-9428-966356c68a90-kube-api-access-4j2zb\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771806 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771823 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771835 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771847 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771861 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtnz\" (UniqueName: \"kubernetes.io/projected/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-kube-api-access-vgtnz\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771875 4814 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771886 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771897 4814 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771908 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771922 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771933 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.771946 4814 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc 
kubenswrapper[4814]: I0216 09:49:39.803658 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37e44ee2-4f8c-44f7-9428-966356c68a90" (UID: "37e44ee2-4f8c-44f7-9428-966356c68a90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.872855 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e44ee2-4f8c-44f7-9428-966356c68a90-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.951406 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.961716 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-khzgg"] Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.967503 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"] Feb 16 09:49:39 crc kubenswrapper[4814]: I0216 09:49:39.972314 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vv6v6"] Feb 16 09:49:41 crc kubenswrapper[4814]: I0216 09:49:41.009444 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" path="/var/lib/kubelet/pods/37e44ee2-4f8c-44f7-9428-966356c68a90/volumes" Feb 16 09:49:41 crc kubenswrapper[4814]: I0216 09:49:41.011278 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" path="/var/lib/kubelet/pods/f51c8b2c-1728-4385-a7a4-f55a2f7cc18a/volumes" Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.493324 4814 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"] Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.493592 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" podUID="9487fcd1-54b9-46fa-8204-157a532b9df0" containerName="controller-manager" containerID="cri-o://29a2b8af0249d124f0bc7f4a89409dd4176acba6f0c202c02069bf01874315aa" gracePeriod=30 Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.594703 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"] Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.595229 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" podUID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" containerName="route-controller-manager" containerID="cri-o://64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d" gracePeriod=30 Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.651962 4814 generic.go:334] "Generic (PLEG): container finished" podID="9487fcd1-54b9-46fa-8204-157a532b9df0" containerID="29a2b8af0249d124f0bc7f4a89409dd4176acba6f0c202c02069bf01874315aa" exitCode=0 Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.652062 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" event={"ID":"9487fcd1-54b9-46fa-8204-157a532b9df0","Type":"ContainerDied","Data":"29a2b8af0249d124f0bc7f4a89409dd4176acba6f0c202c02069bf01874315aa"} Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.938612 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:42 crc kubenswrapper[4814]: I0216 09:49:42.991452 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.018512 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert\") pod \"9487fcd1-54b9-46fa-8204-157a532b9df0\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.018581 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca\") pod \"9487fcd1-54b9-46fa-8204-157a532b9df0\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.018704 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ffnf\" (UniqueName: \"kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf\") pod \"9487fcd1-54b9-46fa-8204-157a532b9df0\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.018749 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles\") pod \"9487fcd1-54b9-46fa-8204-157a532b9df0\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.018846 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config\") pod 
\"9487fcd1-54b9-46fa-8204-157a532b9df0\" (UID: \"9487fcd1-54b9-46fa-8204-157a532b9df0\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.019714 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9487fcd1-54b9-46fa-8204-157a532b9df0" (UID: "9487fcd1-54b9-46fa-8204-157a532b9df0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.019750 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca" (OuterVolumeSpecName: "client-ca") pod "9487fcd1-54b9-46fa-8204-157a532b9df0" (UID: "9487fcd1-54b9-46fa-8204-157a532b9df0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.019852 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config" (OuterVolumeSpecName: "config") pod "9487fcd1-54b9-46fa-8204-157a532b9df0" (UID: "9487fcd1-54b9-46fa-8204-157a532b9df0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.026667 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9487fcd1-54b9-46fa-8204-157a532b9df0" (UID: "9487fcd1-54b9-46fa-8204-157a532b9df0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.026769 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf" (OuterVolumeSpecName: "kube-api-access-7ffnf") pod "9487fcd1-54b9-46fa-8204-157a532b9df0" (UID: "9487fcd1-54b9-46fa-8204-157a532b9df0"). InnerVolumeSpecName "kube-api-access-7ffnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.120704 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjr8l\" (UniqueName: \"kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l\") pod \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121238 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config\") pod \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121361 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert\") pod \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121395 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca\") pod \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\" (UID: \"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4\") " Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121748 4814 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121770 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9487fcd1-54b9-46fa-8204-157a532b9df0-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121781 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121812 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ffnf\" (UniqueName: \"kubernetes.io/projected/9487fcd1-54b9-46fa-8204-157a532b9df0-kube-api-access-7ffnf\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.121823 4814 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9487fcd1-54b9-46fa-8204-157a532b9df0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.122637 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config" (OuterVolumeSpecName: "config") pod "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" (UID: "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.124124 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" (UID: "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.124978 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l" (OuterVolumeSpecName: "kube-api-access-sjr8l") pod "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" (UID: "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4"). InnerVolumeSpecName "kube-api-access-sjr8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.125680 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" (UID: "20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.223450 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjr8l\" (UniqueName: \"kubernetes.io/projected/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-kube-api-access-sjr8l\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.223488 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-config\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.223498 4814 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.223510 4814 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.659218 4814 generic.go:334] "Generic (PLEG): container finished" podID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" containerID="64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d" exitCode=0 Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.659316 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" event={"ID":"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4","Type":"ContainerDied","Data":"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d"} Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.659369 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" 
event={"ID":"20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4","Type":"ContainerDied","Data":"b1e8057cd7f1901d3729d91769948b6411a192662fd3ffd7984851d35da78634"} Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.659398 4814 scope.go:117] "RemoveContainer" containerID="64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.659448 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.663335 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" event={"ID":"9487fcd1-54b9-46fa-8204-157a532b9df0","Type":"ContainerDied","Data":"af83c7456081e7eea2844385d0430871b306b87b3da468b00819296b634b88f5"} Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.663396 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-795ff79796-hcd8h" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.675982 4814 scope.go:117] "RemoveContainer" containerID="64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.676601 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d\": container with ID starting with 64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d not found: ID does not exist" containerID="64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.676641 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d"} err="failed to get container status \"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d\": rpc error: code = NotFound desc = could not find container \"64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d\": container with ID starting with 64300815c4ae46e14a9afbe1b469c4bfbc2703f73b6007344d5df8ef503a9b0d not found: ID does not exist" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.676674 4814 scope.go:117] "RemoveContainer" containerID="29a2b8af0249d124f0bc7f4a89409dd4176acba6f0c202c02069bf01874315aa" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.707856 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"] Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.707935 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-795ff79796-hcd8h"] Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.723832 4814 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"] Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.728816 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bbc7bc859-6z2pm"] Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.940841 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-574c4dddcf-4g7v6"] Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941122 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941141 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941161 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941172 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941197 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941209 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941221 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941231 4814 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941248 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9487fcd1-54b9-46fa-8204-157a532b9df0" containerName="controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941259 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9487fcd1-54b9-46fa-8204-157a532b9df0" containerName="controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941279 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941289 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941303 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941315 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941336 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941346 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941358 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941366 4814 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="extract-utilities" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941376 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941385 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941400 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941411 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941428 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941438 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941449 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941457 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="extract-content" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941466 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerName="oauth-openshift" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941474 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerName="oauth-openshift" Feb 16 09:49:43 crc kubenswrapper[4814]: E0216 09:49:43.941487 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" containerName="route-controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941498 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" containerName="route-controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941700 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="231208dc-d685-4a03-935e-ac1f6c6f7bf4" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941726 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ecbcef-c7e9-4e4c-93b3-63d71c4c097c" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941744 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e44ee2-4f8c-44f7-9428-966356c68a90" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941757 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9487fcd1-54b9-46fa-8204-157a532b9df0" containerName="controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941770 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" containerName="route-controller-manager" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941786 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51c8b2c-1728-4385-a7a4-f55a2f7cc18a" containerName="oauth-openshift" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.941801 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1093e2eb-672e-4aae-8ee6-ffc390592ff8" containerName="registry-server" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.942427 4814 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.946418 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.946942 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.947343 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.947943 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86f746897-pjd67"] Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.948100 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.949032 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.966186 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 09:49:43 crc kubenswrapper[4814]: I0216 09:49:43.969431 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.013329 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c4dddcf-4g7v6"] Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.013366 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.013669 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.014015 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.014276 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.014388 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.014499 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.017914 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86f746897-pjd67"] Feb 16 
09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.018767 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.033910 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-config\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.033991 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwpql\" (UniqueName: \"kubernetes.io/projected/1165800c-5b80-43be-9264-383f3228dc73-kube-api-access-nwpql\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034028 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-config\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034064 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-client-ca\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc 
kubenswrapper[4814]: I0216 09:49:44.034093 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-client-ca\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034117 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6444464-82d8-47ac-80c9-69bef8139935-serving-cert\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034137 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-proxy-ca-bundles\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwsn\" (UniqueName: \"kubernetes.io/projected/d6444464-82d8-47ac-80c9-69bef8139935-kube-api-access-lpwsn\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.034209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1165800c-5b80-43be-9264-383f3228dc73-serving-cert\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136115 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6444464-82d8-47ac-80c9-69bef8139935-serving-cert\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136202 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-proxy-ca-bundles\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136262 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpwsn\" (UniqueName: \"kubernetes.io/projected/d6444464-82d8-47ac-80c9-69bef8139935-kube-api-access-lpwsn\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136289 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1165800c-5b80-43be-9264-383f3228dc73-serving-cert\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc 
kubenswrapper[4814]: I0216 09:49:44.136324 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-config\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136373 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwpql\" (UniqueName: \"kubernetes.io/projected/1165800c-5b80-43be-9264-383f3228dc73-kube-api-access-nwpql\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136403 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-config\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136437 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-client-ca\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.136469 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-client-ca\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: 
\"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.138115 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-proxy-ca-bundles\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.138392 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-client-ca\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.138616 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-client-ca\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.138997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1165800c-5b80-43be-9264-383f3228dc73-config\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.139699 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d6444464-82d8-47ac-80c9-69bef8139935-config\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.143599 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6444464-82d8-47ac-80c9-69bef8139935-serving-cert\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.144102 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1165800c-5b80-43be-9264-383f3228dc73-serving-cert\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.158567 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpwsn\" (UniqueName: \"kubernetes.io/projected/d6444464-82d8-47ac-80c9-69bef8139935-kube-api-access-lpwsn\") pod \"controller-manager-574c4dddcf-4g7v6\" (UID: \"d6444464-82d8-47ac-80c9-69bef8139935\") " pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.158659 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwpql\" (UniqueName: \"kubernetes.io/projected/1165800c-5b80-43be-9264-383f3228dc73-kube-api-access-nwpql\") pod \"route-controller-manager-86f746897-pjd67\" (UID: \"1165800c-5b80-43be-9264-383f3228dc73\") " pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 
09:49:44.325151 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.329195 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.765142 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c4dddcf-4g7v6"] Feb 16 09:49:44 crc kubenswrapper[4814]: W0216 09:49:44.774475 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6444464_82d8_47ac_80c9_69bef8139935.slice/crio-596a991253b555cd572c156a5a739527c6fa8804c06541393a6c55f26994b12a WatchSource:0}: Error finding container 596a991253b555cd572c156a5a739527c6fa8804c06541393a6c55f26994b12a: Status 404 returned error can't find the container with id 596a991253b555cd572c156a5a739527c6fa8804c06541393a6c55f26994b12a Feb 16 09:49:44 crc kubenswrapper[4814]: I0216 09:49:44.912729 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86f746897-pjd67"] Feb 16 09:49:44 crc kubenswrapper[4814]: W0216 09:49:44.918596 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1165800c_5b80_43be_9264_383f3228dc73.slice/crio-d6181952be061b09790e939eec82acd92bcc11e9abb819a7cdbd9ff6fe72576a WatchSource:0}: Error finding container d6181952be061b09790e939eec82acd92bcc11e9abb819a7cdbd9ff6fe72576a: Status 404 returned error can't find the container with id d6181952be061b09790e939eec82acd92bcc11e9abb819a7cdbd9ff6fe72576a Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.000377 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4" path="/var/lib/kubelet/pods/20b3b5c4-e32a-4ec3-97fe-69d83a0ce5b4/volumes" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.001357 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9487fcd1-54b9-46fa-8204-157a532b9df0" path="/var/lib/kubelet/pods/9487fcd1-54b9-46fa-8204-157a532b9df0/volumes" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.682366 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" event={"ID":"d6444464-82d8-47ac-80c9-69bef8139935","Type":"ContainerStarted","Data":"ac457b25c549089ede09355de635b8b9c3e95807d2b47fe1379590db0b4ee889"} Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.683098 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.683120 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" event={"ID":"d6444464-82d8-47ac-80c9-69bef8139935","Type":"ContainerStarted","Data":"596a991253b555cd572c156a5a739527c6fa8804c06541393a6c55f26994b12a"} Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.683954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" event={"ID":"1165800c-5b80-43be-9264-383f3228dc73","Type":"ContainerStarted","Data":"738c5d9ac545f4950486bd765306fbb6fa0f5956f3ee754e35e6e11444748347"} Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.684026 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" event={"ID":"1165800c-5b80-43be-9264-383f3228dc73","Type":"ContainerStarted","Data":"d6181952be061b09790e939eec82acd92bcc11e9abb819a7cdbd9ff6fe72576a"} Feb 16 09:49:45 crc kubenswrapper[4814]: 
I0216 09:49:45.684182 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.689981 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.692464 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.706027 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-574c4dddcf-4g7v6" podStartSLOduration=3.706002721 podStartE2EDuration="3.706002721s" podCreationTimestamp="2026-02-16 09:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:45.705770404 +0000 UTC m=+243.398926594" watchObservedRunningTime="2026-02-16 09:49:45.706002721 +0000 UTC m=+243.399158901" Feb 16 09:49:45 crc kubenswrapper[4814]: I0216 09:49:45.730754 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" podStartSLOduration=3.730729661 podStartE2EDuration="3.730729661s" podCreationTimestamp="2026-02-16 09:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:49:45.729845396 +0000 UTC m=+243.423001576" watchObservedRunningTime="2026-02-16 09:49:45.730729661 +0000 UTC m=+243.423885841" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.943492 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-78574554d5-8s7lx"] Feb 16 09:49:48 crc 
kubenswrapper[4814]: I0216 09:49:48.944311 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.948064 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.948276 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.949677 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.949880 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.950682 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.950875 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.951321 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.951509 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.951570 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.951637 4814 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.951574 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.952927 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.963635 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.964894 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.970716 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78574554d5-8s7lx"] Feb 16 09:49:48 crc kubenswrapper[4814]: I0216 09:49:48.970796 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006079 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-session\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006127 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzqvw\" (UniqueName: \"kubernetes.io/projected/85dae907-e6fa-4c83-811d-dee3c2d22212-kube-api-access-qzqvw\") pod 
\"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006154 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-policies\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006175 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006203 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-router-certs\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006319 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " 
pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006460 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-error\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006527 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006604 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006761 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-login\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006809 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-service-ca\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006835 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-dir\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.006854 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108058 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108163 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-error\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108217 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108246 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108356 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " 
pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108407 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-login\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108440 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-service-ca\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108468 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-dir\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108494 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.108961 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-dir\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.111445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.112167 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.109009 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-session\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.112431 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzqvw\" (UniqueName: \"kubernetes.io/projected/85dae907-e6fa-4c83-811d-dee3c2d22212-kube-api-access-qzqvw\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc 
kubenswrapper[4814]: I0216 09:49:49.112518 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-policies\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.112594 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.112632 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-router-certs\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.112675 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-service-ca\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.113893 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85dae907-e6fa-4c83-811d-dee3c2d22212-audit-policies\") pod 
\"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.115347 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-session\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.120851 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-error\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.121029 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.121346 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-login\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.124875 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-router-certs\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.132021 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.133242 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.133988 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85dae907-e6fa-4c83-811d-dee3c2d22212-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: \"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.136806 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzqvw\" (UniqueName: \"kubernetes.io/projected/85dae907-e6fa-4c83-811d-dee3c2d22212-kube-api-access-qzqvw\") pod \"oauth-openshift-78574554d5-8s7lx\" (UID: 
\"85dae907-e6fa-4c83-811d-dee3c2d22212\") " pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.263978 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.546166 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78574554d5-8s7lx"] Feb 16 09:49:49 crc kubenswrapper[4814]: I0216 09:49:49.706989 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" event={"ID":"85dae907-e6fa-4c83-811d-dee3c2d22212","Type":"ContainerStarted","Data":"6abe0f941dd73052cf0bf6b82c08e3135799280384d923d7a6b4f79942b7c29c"} Feb 16 09:49:50 crc kubenswrapper[4814]: I0216 09:49:50.719902 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" event={"ID":"85dae907-e6fa-4c83-811d-dee3c2d22212","Type":"ContainerStarted","Data":"671f6201a4c6aedf658cd4d2ba382405879f1b980a76add292dc733797bfa39e"} Feb 16 09:49:50 crc kubenswrapper[4814]: I0216 09:49:50.720381 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:50 crc kubenswrapper[4814]: I0216 09:49:50.729134 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" Feb 16 09:49:50 crc kubenswrapper[4814]: I0216 09:49:50.756832 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-78574554d5-8s7lx" podStartSLOduration=36.75679166 podStartE2EDuration="36.75679166s" podCreationTimestamp="2026-02-16 09:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 09:49:50.748844677 +0000 UTC m=+248.442000927" watchObservedRunningTime="2026-02-16 09:49:50.75679166 +0000 UTC m=+248.449947890" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.213615 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.214714 4814 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.214964 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.215177 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367" gracePeriod=15 Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.215294 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a" gracePeriod=15 Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.215292 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a" gracePeriod=15 Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.215397 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7" gracePeriod=15 Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.215295 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f" gracePeriod=15 Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.216403 4814 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.216659 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.216686 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.216696 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.216702 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.216714 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.216720 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.218273 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218292 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.218307 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218313 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.218325 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218331 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.218343 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218349 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218491 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218501 4814 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218514 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218523 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218548 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218557 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.218664 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218673 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.218781 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.243661 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.243744 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.243775 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.243911 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.244001 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.255133 4814 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346151 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346251 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346302 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346329 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346453 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346453 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346576 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346593 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346628 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346694 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346742 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.346744 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.449003 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.449129 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: 
I0216 09:49:51.449139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.449187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.449220 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.449320 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.557156 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 09:49:51 crc kubenswrapper[4814]: W0216 09:49:51.594348 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-86ce450ca3542ce9a3fbd898912c16794390318b59d067ecc8200a133df1159f WatchSource:0}: Error finding container 86ce450ca3542ce9a3fbd898912c16794390318b59d067ecc8200a133df1159f: Status 404 returned error can't find the container with id 86ce450ca3542ce9a3fbd898912c16794390318b59d067ecc8200a133df1159f
Feb 16 09:49:51 crc kubenswrapper[4814]: E0216 09:49:51.597741 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894b13449bcc85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,LastTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.730783 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.732618 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.733570 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a" exitCode=0
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.733613 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7" exitCode=0
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.733625 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a" exitCode=0
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.733637 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f" exitCode=2
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.733734 4814 scope.go:117] "RemoveContainer" containerID="6fe78df8d4e1c72f860ceaf56dc5bf0f9b33274c509f3199af714ffa7827d602"
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.735601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"86ce450ca3542ce9a3fbd898912c16794390318b59d067ecc8200a133df1159f"}
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.737742 4814 generic.go:334] "Generic (PLEG): container finished" podID="f410ac3b-3f81-4ca4-8c09-70f312086d54" containerID="e118e0999623ce694677e816b1ab8532f148f24c0b43bb46c44dff8b3d97852f" exitCode=0
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.737883 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f410ac3b-3f81-4ca4-8c09-70f312086d54","Type":"ContainerDied","Data":"e118e0999623ce694677e816b1ab8532f148f24c0b43bb46c44dff8b3d97852f"}
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.738829 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:51 crc kubenswrapper[4814]: I0216 09:49:51.739273 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:52 crc kubenswrapper[4814]: E0216 09:49:52.032575 4814 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" volumeName="registry-storage"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.240639 4814 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.241122 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.753083 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.758853 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a1ee3f865d9671132af72cf484e0ce0f652a3ecffcefba69befa6887b89b40b3"}
Feb 16 09:49:52 crc kubenswrapper[4814]: E0216 09:49:52.760009 4814 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.760064 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.760762 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.996955 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:52 crc kubenswrapper[4814]: I0216 09:49:52.997455 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.046018 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.046822 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.075361 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock\") pod \"f410ac3b-3f81-4ca4-8c09-70f312086d54\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.075491 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access\") pod \"f410ac3b-3f81-4ca4-8c09-70f312086d54\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.075486 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock" (OuterVolumeSpecName: "var-lock") pod "f410ac3b-3f81-4ca4-8c09-70f312086d54" (UID: "f410ac3b-3f81-4ca4-8c09-70f312086d54"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.075655 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir\") pod \"f410ac3b-3f81-4ca4-8c09-70f312086d54\" (UID: \"f410ac3b-3f81-4ca4-8c09-70f312086d54\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.075688 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f410ac3b-3f81-4ca4-8c09-70f312086d54" (UID: "f410ac3b-3f81-4ca4-8c09-70f312086d54"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.076687 4814 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.076764 4814 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f410ac3b-3f81-4ca4-8c09-70f312086d54-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.082816 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f410ac3b-3f81-4ca4-8c09-70f312086d54" (UID: "f410ac3b-3f81-4ca4-8c09-70f312086d54"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.177778 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f410ac3b-3f81-4ca4-8c09-70f312086d54-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.585559 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.586590 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.587822 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.588285 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686418 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686587 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686605 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686619 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686683 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686743 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686850 4814 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686866 4814 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.686875 4814 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.767951 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f410ac3b-3f81-4ca4-8c09-70f312086d54","Type":"ContainerDied","Data":"0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32"}
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.768414 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b8d03812af70bfccda0ae4e8b16c5c0d06baff9e6e5300f465872af78f0dc32"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.767992 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.773244 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.774502 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367" exitCode=0
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.774641 4814 scope.go:117] "RemoveContainer" containerID="681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.774718 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.775724 4814 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.793054 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.793694 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.800438 4814 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.801207 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.801968 4814 scope.go:117] "RemoveContainer" containerID="158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.817954 4814 scope.go:117] "RemoveContainer" containerID="fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.839626 4814 scope.go:117] "RemoveContainer" containerID="22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.861414 4814 scope.go:117] "RemoveContainer" containerID="bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.883947 4814 scope.go:117] "RemoveContainer" containerID="b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.905010 4814 scope.go:117] "RemoveContainer" containerID="681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.905709 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\": container with ID starting with 681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a not found: ID does not exist" containerID="681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.905784 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a"} err="failed to get container status \"681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\": rpc error: code = NotFound desc = could not find container \"681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a\": container with ID starting with 681d5bbe75891fadee97dc9c3d8e08b47efb63bc38b788322d98b43d3e2f039a not found: ID does not exist"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.905824 4814 scope.go:117] "RemoveContainer" containerID="158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.906361 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\": container with ID starting with 158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7 not found: ID does not exist" containerID="158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.906416 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7"} err="failed to get container status \"158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\": rpc error: code = NotFound desc = could not find container \"158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7\": container with ID starting with 158bd8d803cad03915a68590f2897213d9157ebe28a075af0c623887023e01d7 not found: ID does not exist"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.906452 4814 scope.go:117] "RemoveContainer" containerID="fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.907006 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\": container with ID starting with fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a not found: ID does not exist" containerID="fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.907076 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a"} err="failed to get container status \"fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\": rpc error: code = NotFound desc = could not find container \"fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a\": container with ID starting with fcf578923974946fd982589e3d4025903cc3a3065f421203dc12109a13a38b9a not found: ID does not exist"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.907092 4814 scope.go:117] "RemoveContainer" containerID="22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.907513 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\": container with ID starting with 22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f not found: ID does not exist" containerID="22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.907582 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f"} err="failed to get container status \"22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\": rpc error: code = NotFound desc = could not find container \"22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f\": container with ID starting with 22c2371b920564f90d0ba6501fd8079d96f3e3234baa9fec64cf70798525b36f not found: ID does not exist"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.907619 4814 scope.go:117] "RemoveContainer" containerID="bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.908007 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\": container with ID starting with bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367 not found: ID does not exist" containerID="bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.908038 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367"} err="failed to get container status \"bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\": rpc error: code = NotFound desc = could not find container \"bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367\": container with ID starting with bc2d79d9edbc73be9de35c23a529fb74e52bca1fcd6ed9bfaf59545c7f0b0367 not found: ID does not exist"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.908053 4814 scope.go:117] "RemoveContainer" containerID="b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227"
Feb 16 09:49:53 crc kubenswrapper[4814]: E0216 09:49:53.908304 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\": container with ID starting with b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227 not found: ID does not exist" containerID="b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227"
Feb 16 09:49:53 crc kubenswrapper[4814]: I0216 09:49:53.908329 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227"} err="failed to get container status \"b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\": rpc error: code = NotFound desc = could not find container \"b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227\": container with ID starting with b400a08bb52f3a9566574799ed9193e20e0f66ba716359f264997e8b1e517227 not found: ID does not exist"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.248617 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.249093 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.249844 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.250219 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.250502 4814 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:49:54 crc kubenswrapper[4814]: I0216 09:49:54.250564 4814 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.250965 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="200ms"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.451526 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="400ms"
Feb 16 09:49:54 crc kubenswrapper[4814]: E0216 09:49:54.852191 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="800ms"
Feb 16 09:49:55 crc kubenswrapper[4814]: I0216 09:49:55.000977 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 16 09:49:55 crc kubenswrapper[4814]: E0216 09:49:55.653710 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="1.6s"
Feb 16 09:49:57 crc kubenswrapper[4814]: E0216 09:49:57.255520 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="3.2s"
Feb 16 09:49:59 crc kubenswrapper[4814]: E0216 09:49:59.503104 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894b13449bcc85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,LastTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 09:50:00 crc kubenswrapper[4814]: E0216 09:50:00.457357 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="6.4s"
Feb 16 09:50:02 crc kubenswrapper[4814]: I0216 09:50:02.996949 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:50:04 crc kubenswrapper[4814]: I0216 09:50:04.993506 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:50:04 crc kubenswrapper[4814]: I0216 09:50:04.994720 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:50:05 crc kubenswrapper[4814]: I0216 09:50:05.013516 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe"
Feb 16 09:50:05 crc kubenswrapper[4814]: I0216 09:50:05.013584 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe"
Feb 16 09:50:05 crc kubenswrapper[4814]: E0216 09:50:05.014287 4814 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:50:05 crc kubenswrapper[4814]: I0216 09:50:05.015110 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:50:05 crc kubenswrapper[4814]: W0216 09:50:05.043952 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-bcf799e16441a8653cf0860cc2d2df941fdfc47ee22bc9586292991cce6ab754 WatchSource:0}: Error finding container bcf799e16441a8653cf0860cc2d2df941fdfc47ee22bc9586292991cce6ab754: Status 404 returned error can't find the container with id bcf799e16441a8653cf0860cc2d2df941fdfc47ee22bc9586292991cce6ab754
Feb 16 09:50:05 crc kubenswrapper[4814]: I0216 09:50:05.869376 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bcf799e16441a8653cf0860cc2d2df941fdfc47ee22bc9586292991cce6ab754"}
Feb 16 09:50:06 crc kubenswrapper[4814]: I0216 09:50:06.038918 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 16 09:50:06 crc kubenswrapper[4814]: I0216 09:50:06.039006 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 16 09:50:06 crc kubenswrapper[4814]: E0216 09:50:06.859037 4814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.73:6443: connect: connection refused" interval="7s"
Feb 16 09:50:06 crc kubenswrapper[4814]: I0216 09:50:06.881461 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 16 09:50:06 crc kubenswrapper[4814]: I0216 09:50:06.881623 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 16 09:50:09 crc kubenswrapper[4814]: E0216 09:50:09.505412 4814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.73:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894b13449bcc85f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,LastTimestamp:2026-02-16 09:49:51.596972127 +0000 UTC m=+249.290128307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.907247 4814 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8dd608bf92ae56165434fc9737d85fbf4fcdae579e4c65544e1d7a24adcbb2b0" exitCode=0
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.907335 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8dd608bf92ae56165434fc9737d85fbf4fcdae579e4c65544e1d7a24adcbb2b0"}
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.907823 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe"
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.907876 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe"
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.908130 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused"
Feb 16 09:50:09 crc kubenswrapper[4814]: E0216 09:50:09.908663 4814 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.910886 4814 log.go:25] "Finished parsing log
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.910957 4814 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6" exitCode=1 Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.910998 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6"} Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.911786 4814 status_manager.go:851] "Failed to get status for pod" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.911912 4814 scope.go:117] "RemoveContainer" containerID="8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6" Feb 16 09:50:09 crc kubenswrapper[4814]: I0216 09:50:09.912475 4814 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.73:6443: connect: connection refused" Feb 16 09:50:10 crc kubenswrapper[4814]: I0216 09:50:10.924482 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 09:50:10 crc 
kubenswrapper[4814]: I0216 09:50:10.924921 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55db2a7205405946d0d116763e9c905e4d73d4f6a50b517f1c054748ed8389ea"} Feb 16 09:50:10 crc kubenswrapper[4814]: I0216 09:50:10.928399 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e169a1dbb4ca1f418a946d921f6d72a8e9ac412b70f169438aa024712170b54"} Feb 16 09:50:10 crc kubenswrapper[4814]: I0216 09:50:10.928435 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f60f8fc45dd737053d0ef914c77887c9cf44aaaaa5f8da391287f724c7a1fb10"} Feb 16 09:50:10 crc kubenswrapper[4814]: I0216 09:50:10.928452 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c5c74621df84041ec11c424ee652b8d2c35b9b0b3df8254ea6f69d05f34e7329"} Feb 16 09:50:10 crc kubenswrapper[4814]: I0216 09:50:10.928468 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3d8b52e71ab117920670f96ffed43c548c8432b973163f5cb8712e51da33afa"} Feb 16 09:50:11 crc kubenswrapper[4814]: I0216 09:50:11.937427 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa5d249b4d87a7b546323007f2e59b586c6a4c74b21f33a74bde4ed3c8b42df4"} Feb 16 09:50:11 crc kubenswrapper[4814]: I0216 09:50:11.938031 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:11 crc kubenswrapper[4814]: I0216 09:50:11.937944 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:11 crc kubenswrapper[4814]: I0216 09:50:11.938062 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:15 crc kubenswrapper[4814]: I0216 09:50:15.015327 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:15 crc kubenswrapper[4814]: I0216 09:50:15.015629 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:15 crc kubenswrapper[4814]: I0216 09:50:15.021977 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:16 crc kubenswrapper[4814]: I0216 09:50:16.879205 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:50:16 crc kubenswrapper[4814]: I0216 09:50:16.974795 4814 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.069398 4814 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="169dcc24-c86b-4ce0-9dae-3c1f7de2b178" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.086977 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.087191 4814 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.087256 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.975224 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.975267 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.985926 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 09:50:17 crc kubenswrapper[4814]: I0216 09:50:17.989185 4814 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="169dcc24-c86b-4ce0-9dae-3c1f7de2b178" Feb 16 09:50:18 crc kubenswrapper[4814]: I0216 09:50:18.984154 4814 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:18 crc kubenswrapper[4814]: I0216 09:50:18.984218 4814 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="2253174d-f4ae-4b6a-bfdb-10b821ba8fbe" Feb 16 09:50:18 crc kubenswrapper[4814]: I0216 09:50:18.988889 4814 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="169dcc24-c86b-4ce0-9dae-3c1f7de2b178" Feb 16 09:50:26 crc kubenswrapper[4814]: I0216 09:50:26.896841 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 09:50:27 crc kubenswrapper[4814]: I0216 09:50:27.087507 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 09:50:27 crc kubenswrapper[4814]: I0216 09:50:27.087611 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 09:50:27 crc kubenswrapper[4814]: I0216 09:50:27.392269 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 09:50:27 crc kubenswrapper[4814]: I0216 09:50:27.564515 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 09:50:27 crc kubenswrapper[4814]: I0216 09:50:27.907483 4814 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 09:50:29 crc kubenswrapper[4814]: I0216 09:50:29.141627 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 09:50:29 crc kubenswrapper[4814]: I0216 09:50:29.572581 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 09:50:29 crc kubenswrapper[4814]: I0216 09:50:29.615304 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 09:50:29 crc kubenswrapper[4814]: I0216 09:50:29.850786 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.054816 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.101134 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.298093 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.298316 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.427247 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.492564 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.494890 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.550870 4814 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.673040 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.788101 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.932767 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.972085 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 09:50:30 crc kubenswrapper[4814]: I0216 09:50:30.978167 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.066161 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.068192 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.091302 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.135970 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.253979 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.294659 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.326711 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.375346 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.457804 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.494442 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.496067 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.513007 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.536602 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.619807 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.722875 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" 
Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.803586 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 09:50:31 crc kubenswrapper[4814]: I0216 09:50:31.837673 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.046142 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.070296 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.084911 4814 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.261648 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.279587 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.303426 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.503822 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.646933 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.775922 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.841274 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.862192 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 09:50:32 crc kubenswrapper[4814]: I0216 09:50:32.991141 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.017751 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.020678 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.063031 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.069736 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.098513 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.151908 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.269832 4814 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 
09:50:33.308689 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.593787 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.603403 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.617266 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.666867 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.780606 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.814097 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.864371 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.973493 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.979680 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 09:50:33 crc kubenswrapper[4814]: I0216 09:50:33.999487 4814 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.250407 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.400340 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.435410 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.650938 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.693576 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.753177 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.889143 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 09:50:34 crc kubenswrapper[4814]: I0216 09:50:34.893179 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.010029 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.124999 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 
09:50:35.189813 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.367691 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.415098 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.770962 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.786708 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.810306 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.821418 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.839765 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.851235 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.893037 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 09:50:35 crc kubenswrapper[4814]: I0216 09:50:35.929285 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 09:50:36 crc 
kubenswrapper[4814]: I0216 09:50:36.190383 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.198761 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.228892 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.355697 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.450256 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.497121 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.624461 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.656949 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.703011 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.739106 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.797616 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.808624 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.884281 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 16 09:50:36 crc kubenswrapper[4814]: I0216 09:50:36.927290 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.028724 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.087588 4814 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.087681 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.087782 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.088663 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"55db2a7205405946d0d116763e9c905e4d73d4f6a50b517f1c054748ed8389ea"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.088789 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://55db2a7205405946d0d116763e9c905e4d73d4f6a50b517f1c054748ed8389ea" gracePeriod=30
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.141850 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.176070 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.196664 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.240363 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.310583 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.310857 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.359493 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.389514 4814 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.767728 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.816007 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 09:50:37 crc kubenswrapper[4814]: I0216 09:50:37.821712 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.071045 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.109219 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.132526 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.257805 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.370348 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.414603 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.417365 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.418061 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.469811 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.541819 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.598772 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.609497 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.626024 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.791756 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.843082 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.931695 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 09:50:38 crc kubenswrapper[4814]: I0216 09:50:38.991579 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.034041 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.057011 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.067860 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.147138 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.328995 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.341662 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.508007 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.584750 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.808976 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 09:50:39 crc kubenswrapper[4814]: I0216 09:50:39.818920 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.019666 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.143739 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.193250 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.199199 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.216955 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.322310 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.392164 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.474873 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.640622 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.694365 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.698301 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.816369 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 09:50:40 crc kubenswrapper[4814]: I0216 09:50:40.825615 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.086772 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.095745 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.107781 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.116023 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.152838 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.311796 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.333230 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.335048 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.401025 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.469099 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.785003 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.946023 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.960476 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 09:50:41 crc kubenswrapper[4814]: I0216 09:50:41.961965 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.054152 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.466832 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.522819 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.666923 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.741816 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 09:50:42 crc kubenswrapper[4814]: I0216 09:50:42.781334 4814 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 16 09:50:48 crc kubenswrapper[4814]: I0216 09:50:48.487679 4814 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 09:50:48 crc kubenswrapper[4814]: I0216 09:50:48.498612 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 09:50:48 crc kubenswrapper[4814]: I0216 09:50:48.498758 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 09:50:48 crc kubenswrapper[4814]: I0216 09:50:48.538004 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=32.537972104 podStartE2EDuration="32.537972104s" podCreationTimestamp="2026-02-16 09:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:50:48.534274555 +0000 UTC m=+306.227430835" watchObservedRunningTime="2026-02-16 09:50:48.537972104 +0000 UTC m=+306.231128314"
Feb 16 09:50:50 crc kubenswrapper[4814]: I0216 09:50:50.300214 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 09:50:51 crc kubenswrapper[4814]: I0216 09:50:51.007810 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.215778 4814 generic.go:334] "Generic (PLEG): container finished" podID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerID="67c558268fc495fa900056dc45d922297be9c4534d24c684b73eb7ff6ae821cd" exitCode=0
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.215931 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerDied","Data":"67c558268fc495fa900056dc45d922297be9c4534d24c684b73eb7ff6ae821cd"}
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.218603 4814 scope.go:117] "RemoveContainer" containerID="67c558268fc495fa900056dc45d922297be9c4534d24c684b73eb7ff6ae821cd"
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.226301 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.264672 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 09:50:53 crc kubenswrapper[4814]: I0216 09:50:53.813274 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.225112 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cr82j_13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/marketplace-operator/1.log"
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.226815 4814 generic.go:334] "Generic (PLEG): container finished" podID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b" exitCode=1
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.226870 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerDied","Data":"8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b"}
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.226978 4814 scope.go:117] "RemoveContainer" containerID="67c558268fc495fa900056dc45d922297be9c4534d24c684b73eb7ff6ae821cd"
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.227579 4814 scope.go:117] "RemoveContainer" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b"
Feb 16 09:50:54 crc kubenswrapper[4814]: E0216 09:50:54.227817 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-cr82j_openshift-marketplace(13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee"
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.575470 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 09:50:54 crc kubenswrapper[4814]: I0216 09:50:54.692772 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.020240 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.193302 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.234782 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cr82j_13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/marketplace-operator/1.log"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.455376 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.777316 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.809095 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.809234 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j"
Feb 16 09:50:55 crc kubenswrapper[4814]: I0216 09:50:55.810418 4814 scope.go:117] "RemoveContainer" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b"
Feb 16 09:50:55 crc kubenswrapper[4814]: E0216 09:50:55.810832 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-cr82j_openshift-marketplace(13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee"
Feb 16 09:50:56 crc kubenswrapper[4814]: I0216 09:50:56.003641 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 09:50:56 crc kubenswrapper[4814]: I0216 09:50:56.780023 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 16 09:50:57 crc kubenswrapper[4814]: I0216 09:50:57.331217 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 09:50:57 crc kubenswrapper[4814]: I0216 09:50:57.341450 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 16 09:50:57 crc kubenswrapper[4814]: I0216 09:50:57.674175 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 09:50:57 crc kubenswrapper[4814]: I0216 09:50:57.875752 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 16 09:50:58 crc kubenswrapper[4814]: I0216 09:50:58.037942 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 09:50:58 crc kubenswrapper[4814]: I0216 09:50:58.559350 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 09:50:58 crc kubenswrapper[4814]: I0216 09:50:58.684019 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 09:50:58 crc kubenswrapper[4814]: I0216 09:50:58.822025 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 09:50:59 crc kubenswrapper[4814]: I0216 09:50:59.167169 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 09:50:59 crc kubenswrapper[4814]: I0216 09:50:59.461279 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 09:50:59 crc kubenswrapper[4814]: I0216 09:50:59.626062 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 09:50:59 crc kubenswrapper[4814]: I0216 09:50:59.732792 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 09:51:00 crc kubenswrapper[4814]: I0216 09:51:00.023122 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 09:51:00 crc kubenswrapper[4814]: I0216 09:51:00.119150 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 09:51:00 crc kubenswrapper[4814]: I0216 09:51:00.468982 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 09:51:00 crc kubenswrapper[4814]: I0216 09:51:00.782894 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 09:51:00 crc kubenswrapper[4814]: I0216 09:51:00.838783 4814 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 09:51:01 crc kubenswrapper[4814]: I0216 09:51:01.154165 4814 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 09:51:01 crc kubenswrapper[4814]: I0216 09:51:01.154517 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a1ee3f865d9671132af72cf484e0ce0f652a3ecffcefba69befa6887b89b40b3" gracePeriod=5
Feb 16 09:51:01 crc kubenswrapper[4814]: I0216 09:51:01.352680 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 16 09:51:01 crc kubenswrapper[4814]: I0216 09:51:01.542914 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 09:51:02 crc kubenswrapper[4814]: I0216 09:51:02.053694 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 09:51:02 crc kubenswrapper[4814]: I0216 09:51:02.673324 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 09:51:03 crc kubenswrapper[4814]: I0216 09:51:03.160999 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 09:51:04 crc kubenswrapper[4814]: I0216 09:51:04.550308 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 16 09:51:04 crc kubenswrapper[4814]: I0216 09:51:04.613792 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 09:51:04 crc kubenswrapper[4814]: I0216 09:51:04.671877 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 09:51:04 crc kubenswrapper[4814]: I0216 09:51:04.952329 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 09:51:05 crc kubenswrapper[4814]: I0216 09:51:05.327950 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 09:51:05 crc kubenswrapper[4814]: I0216 09:51:05.948807 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.092127 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.329780 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.329864 4814 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a1ee3f865d9671132af72cf484e0ce0f652a3ecffcefba69befa6887b89b40b3" exitCode=137
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.400981 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.734404 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.783458 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.783579 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793340 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793527 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793510 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793576 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793651 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793855 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793960 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.793910 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.794037 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.794659 4814 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.794685 4814 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.794707 4814 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.794724 4814 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.818208 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:51:06 crc kubenswrapper[4814]: I0216 09:51:06.895846 4814 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.025636 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.341019 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.341237 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.341832 4814 scope.go:117] "RemoveContainer" containerID="a1ee3f865d9671132af72cf484e0ce0f652a3ecffcefba69befa6887b89b40b3"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.350217 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.354230 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.354315 4814 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="55db2a7205405946d0d116763e9c905e4d73d4f6a50b517f1c054748ed8389ea" exitCode=137
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.354377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"55db2a7205405946d0d116763e9c905e4d73d4f6a50b517f1c054748ed8389ea"}
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.363690 4814 scope.go:117] "RemoveContainer" containerID="8e0b2ee15927bf88045aff0d75a48692f057f5540a595adc0b17d2eea726e1c6"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.666944 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.847718 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 16 09:51:07 crc kubenswrapper[4814]: I0216 09:51:07.926429 4814
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 09:51:08 crc kubenswrapper[4814]: I0216 09:51:08.155042 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 09:51:08 crc kubenswrapper[4814]: I0216 09:51:08.248676 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 09:51:08 crc kubenswrapper[4814]: I0216 09:51:08.364858 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 16 09:51:08 crc kubenswrapper[4814]: I0216 09:51:08.366890 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"abb6010f3f3f349b7498e5bd0ff845b2d9f54e43665db0312232018c4c46edfd"} Feb 16 09:51:09 crc kubenswrapper[4814]: I0216 09:51:09.443253 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 09:51:09 crc kubenswrapper[4814]: I0216 09:51:09.645305 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 09:51:09 crc kubenswrapper[4814]: I0216 09:51:09.744124 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.007813 4814 scope.go:117] "RemoveContainer" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.016806 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.036360 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.384824 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cr82j_13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/marketplace-operator/1.log" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.385291 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerStarted","Data":"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01"} Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.385732 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.387906 4814 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cr82j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.387978 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.23:8080/healthz\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 16 09:51:10 crc kubenswrapper[4814]: I0216 09:51:10.724468 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 09:51:11 crc 
kubenswrapper[4814]: I0216 09:51:11.391686 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 09:51:11 crc kubenswrapper[4814]: I0216 09:51:11.396775 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:51:11 crc kubenswrapper[4814]: I0216 09:51:11.482061 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 09:51:11 crc kubenswrapper[4814]: I0216 09:51:11.758430 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 09:51:12 crc kubenswrapper[4814]: I0216 09:51:12.053601 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 09:51:12 crc kubenswrapper[4814]: I0216 09:51:12.206496 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 09:51:12 crc kubenswrapper[4814]: I0216 09:51:12.211735 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 09:51:12 crc kubenswrapper[4814]: I0216 09:51:12.541031 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 09:51:12 crc kubenswrapper[4814]: I0216 09:51:12.576375 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 09:51:13 crc kubenswrapper[4814]: I0216 09:51:13.082652 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 09:51:13 crc kubenswrapper[4814]: I0216 09:51:13.160771 4814 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 09:51:13 crc kubenswrapper[4814]: I0216 09:51:13.719664 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 09:51:14 crc kubenswrapper[4814]: I0216 09:51:14.697862 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 09:51:14 crc kubenswrapper[4814]: I0216 09:51:14.776345 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 09:51:15 crc kubenswrapper[4814]: I0216 09:51:15.689467 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 09:51:16 crc kubenswrapper[4814]: I0216 09:51:16.402436 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 09:51:16 crc kubenswrapper[4814]: I0216 09:51:16.484014 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 09:51:16 crc kubenswrapper[4814]: I0216 09:51:16.690755 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 09:51:16 crc kubenswrapper[4814]: I0216 09:51:16.879082 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:51:17 crc kubenswrapper[4814]: I0216 09:51:17.088255 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:51:17 crc kubenswrapper[4814]: I0216 09:51:17.093738 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Feb 16 09:51:17 crc kubenswrapper[4814]: I0216 09:51:17.430900 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 09:51:18 crc kubenswrapper[4814]: I0216 09:51:18.691329 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 09:51:18 crc kubenswrapper[4814]: I0216 09:51:18.768590 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 09:51:20 crc kubenswrapper[4814]: I0216 09:51:20.392309 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 09:51:21 crc kubenswrapper[4814]: I0216 09:51:21.054526 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 09:51:21 crc kubenswrapper[4814]: I0216 09:51:21.647366 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 09:51:22 crc kubenswrapper[4814]: I0216 09:51:22.365478 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 09:51:37 crc kubenswrapper[4814]: I0216 09:51:37.960781 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:51:37 crc kubenswrapper[4814]: I0216 09:51:37.961068 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:52:07 crc kubenswrapper[4814]: I0216 09:52:07.960228 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:52:07 crc kubenswrapper[4814]: I0216 09:52:07.961272 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.539597 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.540428 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dcts9" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="registry-server" containerID="cri-o://29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a" gracePeriod=30 Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.555028 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.555581 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ngqwc" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="registry-server" containerID="cri-o://38327da15d5e694ed57a377d1771379a33d904acd6627e8acd4bb22ba2c41bd3" gracePeriod=30 Feb 16 09:52:33 crc kubenswrapper[4814]: 
I0216 09:52:33.565902 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.566273 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" containerID="cri-o://ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01" gracePeriod=30 Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.578406 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.580806 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jcv4r" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="registry-server" containerID="cri-o://bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb" gracePeriod=30 Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.591373 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.591660 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kz6kn" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="registry-server" containerID="cri-o://acb6c8969b84de9afd1228ed85cc5a06a80013bd0b159352fd749b9fc82106b5" gracePeriod=30 Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.610249 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrxcn"] Feb 16 09:52:33 crc kubenswrapper[4814]: E0216 09:52:33.610568 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 
16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.610584 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 09:52:33 crc kubenswrapper[4814]: E0216 09:52:33.610597 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" containerName="installer" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.610603 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" containerName="installer" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.610734 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.610756 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f410ac3b-3f81-4ca4-8c09-70f312086d54" containerName="installer" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.611719 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.629618 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrxcn"] Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.768403 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.768466 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsx45\" (UniqueName: \"kubernetes.io/projected/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-kube-api-access-hsx45\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.768798 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.869979 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: 
\"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.870476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.870504 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsx45\" (UniqueName: \"kubernetes.io/projected/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-kube-api-access-hsx45\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.871702 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.879383 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:33 crc kubenswrapper[4814]: I0216 09:52:33.887521 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hsx45\" (UniqueName: \"kubernetes.io/projected/6a2f8066-0e53-4f49-ad72-83d1569a8bd4-kube-api-access-hsx45\") pod \"marketplace-operator-79b997595-qrxcn\" (UID: \"6a2f8066-0e53-4f49-ad72-83d1569a8bd4\") " pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.072190 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.076962 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cr82j_13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/marketplace-operator/1.log" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.077049 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.087091 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.088160 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.279934 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics\") pod \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280647 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2dhq\" (UniqueName: \"kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq\") pod \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280755 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities\") pod \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280789 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28l9t\" (UniqueName: \"kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t\") pod \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280820 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlnh6\" (UniqueName: \"kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6\") pod \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280844 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities\") pod \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280887 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca\") pod \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\" (UID: \"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280920 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content\") pod \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\" (UID: \"0857fc2a-4cdb-4f97-aca4-20a08fc1060a\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.280951 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content\") pod \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\" (UID: \"3594c0fb-ca70-4560-ba53-a5e217a0ddf7\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.281686 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qrxcn"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.282277 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" (UID: "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.282409 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities" (OuterVolumeSpecName: "utilities") pod "0857fc2a-4cdb-4f97-aca4-20a08fc1060a" (UID: "0857fc2a-4cdb-4f97-aca4-20a08fc1060a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.282505 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities" (OuterVolumeSpecName: "utilities") pod "3594c0fb-ca70-4560-ba53-a5e217a0ddf7" (UID: "3594c0fb-ca70-4560-ba53-a5e217a0ddf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.288520 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t" (OuterVolumeSpecName: "kube-api-access-28l9t") pod "3594c0fb-ca70-4560-ba53-a5e217a0ddf7" (UID: "3594c0fb-ca70-4560-ba53-a5e217a0ddf7"). InnerVolumeSpecName "kube-api-access-28l9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.290926 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq" (OuterVolumeSpecName: "kube-api-access-b2dhq") pod "0857fc2a-4cdb-4f97-aca4-20a08fc1060a" (UID: "0857fc2a-4cdb-4f97-aca4-20a08fc1060a"). InnerVolumeSpecName "kube-api-access-b2dhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.291979 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6" (OuterVolumeSpecName: "kube-api-access-vlnh6") pod "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" (UID: "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee"). InnerVolumeSpecName "kube-api-access-vlnh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.292850 4814 generic.go:334] "Generic (PLEG): container finished" podID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerID="bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb" exitCode=0 Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.293024 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerDied","Data":"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.293081 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcv4r" event={"ID":"0857fc2a-4cdb-4f97-aca4-20a08fc1060a","Type":"ContainerDied","Data":"8e46ae868fd20131d4586ffb852053c7318cad5b39ff13b121a1aa5cafdee6cb"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.293110 4814 scope.go:117] "RemoveContainer" containerID="bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.293338 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcv4r" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.300319 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" (UID: "13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.318075 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0857fc2a-4cdb-4f97-aca4-20a08fc1060a" (UID: "0857fc2a-4cdb-4f97-aca4-20a08fc1060a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.318715 4814 generic.go:334] "Generic (PLEG): container finished" podID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerID="29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a" exitCode=0 Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.318830 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerDied","Data":"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.318847 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dcts9" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.318873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dcts9" event={"ID":"3594c0fb-ca70-4560-ba53-a5e217a0ddf7","Type":"ContainerDied","Data":"15783dc56790d7ee06eb4a3045985a2b8ff5dabe19d9c6dcb642969c6ec779bb"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.330356 4814 generic.go:334] "Generic (PLEG): container finished" podID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerID="acb6c8969b84de9afd1228ed85cc5a06a80013bd0b159352fd749b9fc82106b5" exitCode=0 Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.330451 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerDied","Data":"acb6c8969b84de9afd1228ed85cc5a06a80013bd0b159352fd749b9fc82106b5"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.343316 4814 scope.go:117] "RemoveContainer" containerID="a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.346439 4814 generic.go:334] "Generic (PLEG): container finished" podID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerID="38327da15d5e694ed57a377d1771379a33d904acd6627e8acd4bb22ba2c41bd3" exitCode=0 Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.346549 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerDied","Data":"38327da15d5e694ed57a377d1771379a33d904acd6627e8acd4bb22ba2c41bd3"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.357481 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"3594c0fb-ca70-4560-ba53-a5e217a0ddf7" (UID: "3594c0fb-ca70-4560-ba53-a5e217a0ddf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.360068 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cr82j_13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/marketplace-operator/1.log" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.360146 4814 generic.go:334] "Generic (PLEG): container finished" podID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerID="ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01" exitCode=0 Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.360193 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerDied","Data":"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.360233 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" event={"ID":"13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee","Type":"ContainerDied","Data":"7cc44e88b211f6cc8f071bea72de693b985a03480d73649ea54abfa8a8c0f94f"} Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.360232 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cr82j" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381758 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381799 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28l9t\" (UniqueName: \"kubernetes.io/projected/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-kube-api-access-28l9t\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381814 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlnh6\" (UniqueName: \"kubernetes.io/projected/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-kube-api-access-vlnh6\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381828 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381841 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381853 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381867 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3594c0fb-ca70-4560-ba53-a5e217a0ddf7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 
16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381877 4814 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.381889 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2dhq\" (UniqueName: \"kubernetes.io/projected/0857fc2a-4cdb-4f97-aca4-20a08fc1060a-kube-api-access-b2dhq\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.387498 4814 scope.go:117] "RemoveContainer" containerID="88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.428823 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.430836 4814 scope.go:117] "RemoveContainer" containerID="bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.431445 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb\": container with ID starting with bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb not found: ID does not exist" containerID="bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.431481 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb"} err="failed to get container status \"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb\": rpc error: code = NotFound desc = could not find container 
\"bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb\": container with ID starting with bc63527e17e3070362a5be28e089810f6f1c3ed7c07d0ff53da8fca6024504fb not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.431504 4814 scope.go:117] "RemoveContainer" containerID="a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.431924 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0\": container with ID starting with a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0 not found: ID does not exist" containerID="a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.431975 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0"} err="failed to get container status \"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0\": rpc error: code = NotFound desc = could not find container \"a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0\": container with ID starting with a1de38a639b8fb8faf5d5bb288518c981522533051ae6dfa765d97b992b292b0 not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.431995 4814 scope.go:117] "RemoveContainer" containerID="88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.434261 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1\": container with ID starting with 88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1 not found: ID does not exist" 
containerID="88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.434290 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1"} err="failed to get container status \"88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1\": rpc error: code = NotFound desc = could not find container \"88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1\": container with ID starting with 88929228e08268f432c3b6245273809159eae7db169edaeea9818d2cea2cf6c1 not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.434303 4814 scope.go:117] "RemoveContainer" containerID="29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.434388 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cr82j"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.457214 4814 scope.go:117] "RemoveContainer" containerID="47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.474567 4814 scope.go:117] "RemoveContainer" containerID="e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.492514 4814 scope.go:117] "RemoveContainer" containerID="29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.493247 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a\": container with ID starting with 29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a not found: ID does not exist" 
containerID="29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.493290 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a"} err="failed to get container status \"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a\": rpc error: code = NotFound desc = could not find container \"29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a\": container with ID starting with 29a88082ead40210024cc9accb165fae4b794184b9ad82d8bc7e17d7b113740a not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.493325 4814 scope.go:117] "RemoveContainer" containerID="47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.493818 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353\": container with ID starting with 47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353 not found: ID does not exist" containerID="47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.493845 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353"} err="failed to get container status \"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353\": rpc error: code = NotFound desc = could not find container \"47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353\": container with ID starting with 47a8664afc2fd69da05a5a1bbe8dd0742906cb92de36a562c0e6440206d29353 not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.493862 4814 scope.go:117] 
"RemoveContainer" containerID="e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.494297 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62\": container with ID starting with e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62 not found: ID does not exist" containerID="e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.494320 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62"} err="failed to get container status \"e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62\": rpc error: code = NotFound desc = could not find container \"e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62\": container with ID starting with e3ce23ad16efd8ea46597af363e64f2d98479073e02b8e980006226c9c4e1f62 not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.494335 4814 scope.go:117] "RemoveContainer" containerID="ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.508279 4814 scope.go:117] "RemoveContainer" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.519090 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.525849 4814 scope.go:117] "RemoveContainer" containerID="ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.526327 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01\": container with ID starting with ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01 not found: ID does not exist" containerID="ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.526355 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01"} err="failed to get container status \"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01\": rpc error: code = NotFound desc = could not find container \"ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01\": container with ID starting with ba8e1fe2b41ead277a8b847eecabeb8b5708fccccd2969c95df077d619142b01 not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.526376 4814 scope.go:117] "RemoveContainer" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b" Feb 16 09:52:34 crc kubenswrapper[4814]: E0216 09:52:34.526731 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b\": container with ID starting with 8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b not found: ID does not exist" containerID="8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b" Feb 16 09:52:34 crc 
kubenswrapper[4814]: I0216 09:52:34.526787 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b"} err="failed to get container status \"8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b\": rpc error: code = NotFound desc = could not find container \"8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b\": container with ID starting with 8bed608b7a7830521a8cb2c2c3c66fc9b681b6e00992aa18d61357186c8bcd5b not found: ID does not exist" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.568884 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.639878 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.650019 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcv4r"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.663128 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.667260 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dcts9"] Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688642 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq697\" (UniqueName: \"kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697\") pod \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688700 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities\") pod \"afb2178d-394e-4d6b-baf0-8242e537aa1e\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688733 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content\") pod \"afb2178d-394e-4d6b-baf0-8242e537aa1e\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688798 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content\") pod \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688856 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities\") pod \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\" (UID: \"b772d6e0-ae59-4ddb-b5f8-301ac88ec747\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.688875 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjtv\" (UniqueName: \"kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv\") pod \"afb2178d-394e-4d6b-baf0-8242e537aa1e\" (UID: \"afb2178d-394e-4d6b-baf0-8242e537aa1e\") " Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.690570 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities" (OuterVolumeSpecName: "utilities") pod "afb2178d-394e-4d6b-baf0-8242e537aa1e" (UID: "afb2178d-394e-4d6b-baf0-8242e537aa1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.690795 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities" (OuterVolumeSpecName: "utilities") pod "b772d6e0-ae59-4ddb-b5f8-301ac88ec747" (UID: "b772d6e0-ae59-4ddb-b5f8-301ac88ec747"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.692856 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv" (OuterVolumeSpecName: "kube-api-access-rgjtv") pod "afb2178d-394e-4d6b-baf0-8242e537aa1e" (UID: "afb2178d-394e-4d6b-baf0-8242e537aa1e"). InnerVolumeSpecName "kube-api-access-rgjtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.693384 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697" (OuterVolumeSpecName: "kube-api-access-hq697") pod "b772d6e0-ae59-4ddb-b5f8-301ac88ec747" (UID: "b772d6e0-ae59-4ddb-b5f8-301ac88ec747"). InnerVolumeSpecName "kube-api-access-hq697". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.744436 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b772d6e0-ae59-4ddb-b5f8-301ac88ec747" (UID: "b772d6e0-ae59-4ddb-b5f8-301ac88ec747"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.790912 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.790945 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgjtv\" (UniqueName: \"kubernetes.io/projected/afb2178d-394e-4d6b-baf0-8242e537aa1e-kube-api-access-rgjtv\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.790957 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq697\" (UniqueName: \"kubernetes.io/projected/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-kube-api-access-hq697\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.790965 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.790977 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b772d6e0-ae59-4ddb-b5f8-301ac88ec747-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.849813 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afb2178d-394e-4d6b-baf0-8242e537aa1e" (UID: "afb2178d-394e-4d6b-baf0-8242e537aa1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:52:34 crc kubenswrapper[4814]: I0216 09:52:34.892310 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afb2178d-394e-4d6b-baf0-8242e537aa1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.001370 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" path="/var/lib/kubelet/pods/0857fc2a-4cdb-4f97-aca4-20a08fc1060a/volumes" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.002354 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" path="/var/lib/kubelet/pods/13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee/volumes" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.003085 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" path="/var/lib/kubelet/pods/3594c0fb-ca70-4560-ba53-a5e217a0ddf7/volumes" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.111767 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pcwpm"] Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.111970 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.111984 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.111996 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112003 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112011 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112018 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112030 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112037 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112046 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112051 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112059 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112066 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112073 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112079 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112088 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112093 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112103 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112108 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112116 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112123 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112131 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112137 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112146 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112153 4814 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112163 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112170 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112178 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112185 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="extract-content" Feb 16 09:52:35 crc kubenswrapper[4814]: E0216 09:52:35.112196 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112202 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="extract-utilities" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112298 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0857fc2a-4cdb-4f97-aca4-20a08fc1060a" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112309 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112317 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112327 4814 
memory_manager.go:354] "RemoveStaleState removing state" podUID="13475cfa-cd0b-4a11-ac73-e2a8ae27c1ee" containerName="marketplace-operator" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112335 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112343 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112350 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3594c0fb-ca70-4560-ba53-a5e217a0ddf7" containerName="registry-server" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.112746 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.153253 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pcwpm"] Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.296884 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45nmp\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-kube-api-access-45nmp\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.296944 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b73668d4-5f49-40b8-945c-3e4e58a82cef-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.296978 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-bound-sa-token\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.297007 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-trusted-ca\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.297041 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.297243 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-certificates\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.297512 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-tls\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.297754 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b73668d4-5f49-40b8-945c-3e4e58a82cef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.318268 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.367875 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kz6kn" event={"ID":"afb2178d-394e-4d6b-baf0-8242e537aa1e","Type":"ContainerDied","Data":"4c6ffd85536ac25266685f5c6f5b7d8c54315787c7003e78507603d3663c4e12"} Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.367943 4814 scope.go:117] "RemoveContainer" containerID="acb6c8969b84de9afd1228ed85cc5a06a80013bd0b159352fd749b9fc82106b5" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.368087 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kz6kn" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.373751 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ngqwc" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.374200 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ngqwc" event={"ID":"b772d6e0-ae59-4ddb-b5f8-301ac88ec747","Type":"ContainerDied","Data":"bf1b2990e7a25c140e836e86b341c9c5acf4c749eac6c17685c91a0f3cb4f4a7"} Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.376062 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" event={"ID":"6a2f8066-0e53-4f49-ad72-83d1569a8bd4","Type":"ContainerStarted","Data":"53bd28b15af64e97a7fc8c27b68a318b4e35a2d8a4d86719cd8501d6461fdff1"} Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.376094 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" event={"ID":"6a2f8066-0e53-4f49-ad72-83d1569a8bd4","Type":"ContainerStarted","Data":"394c0648d442501d97b382ed5751a7201185c44c7a342c5b5c79d4b45d042a6a"} Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.376446 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.383897 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.396208 4814 scope.go:117] "RemoveContainer" containerID="382e33f4260ddb31c9c26640c6440ad298f96b2ae86a96314f217855b9454dd0" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.397143 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398650 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-tls\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398695 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b73668d4-5f49-40b8-945c-3e4e58a82cef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398726 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45nmp\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-kube-api-access-45nmp\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398752 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b73668d4-5f49-40b8-945c-3e4e58a82cef-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-bound-sa-token\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398798 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-trusted-ca\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.398825 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-certificates\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.399893 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-certificates\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.402057 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73668d4-5f49-40b8-945c-3e4e58a82cef-trusted-ca\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.404173 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-registry-tls\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc 
kubenswrapper[4814]: I0216 09:52:35.413506 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b73668d4-5f49-40b8-945c-3e4e58a82cef-ca-trust-extracted\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.416415 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kz6kn"] Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.423815 4814 scope.go:117] "RemoveContainer" containerID="2bc27b0bc0360de9722db81828f289841705b9e35d296b3b777d3ecc936515f7" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.424737 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b73668d4-5f49-40b8-945c-3e4e58a82cef-installation-pull-secrets\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.428659 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-bound-sa-token\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.431753 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45nmp\" (UniqueName: \"kubernetes.io/projected/b73668d4-5f49-40b8-945c-3e4e58a82cef-kube-api-access-45nmp\") pod \"image-registry-66df7c8f76-pcwpm\" (UID: \"b73668d4-5f49-40b8-945c-3e4e58a82cef\") " pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:35 crc 
kubenswrapper[4814]: I0216 09:52:35.464182 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qrxcn" podStartSLOduration=2.464147387 podStartE2EDuration="2.464147387s" podCreationTimestamp="2026-02-16 09:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:52:35.43191318 +0000 UTC m=+413.125069380" watchObservedRunningTime="2026-02-16 09:52:35.464147387 +0000 UTC m=+413.157303567" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.467677 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.471846 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ngqwc"] Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.474912 4814 scope.go:117] "RemoveContainer" containerID="38327da15d5e694ed57a377d1771379a33d904acd6627e8acd4bb22ba2c41bd3" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.489058 4814 scope.go:117] "RemoveContainer" containerID="30805689cff971906f23f442187acc37ead20fa3028cda5c66f0dbc1391f4c94" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.505641 4814 scope.go:117] "RemoveContainer" containerID="0a179a2c3b3e0e415dd88007fe8724483bd061d8f85fb25a7e6cfa68bd258421" Feb 16 09:52:35 crc kubenswrapper[4814]: I0216 09:52:35.729956 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:36 crc kubenswrapper[4814]: I0216 09:52:35.999660 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-pcwpm"] Feb 16 09:52:36 crc kubenswrapper[4814]: I0216 09:52:36.389883 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" event={"ID":"b73668d4-5f49-40b8-945c-3e4e58a82cef","Type":"ContainerStarted","Data":"d3b6b88e50d33c1e32f87b2f4b7df4ec161807452f4965b4924e3d7810db0ba1"} Feb 16 09:52:36 crc kubenswrapper[4814]: I0216 09:52:36.389943 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" event={"ID":"b73668d4-5f49-40b8-945c-3e4e58a82cef","Type":"ContainerStarted","Data":"f24ef487ea402335508f53fcd451630160e38e5b99c9ba2c139438fba423c221"} Feb 16 09:52:36 crc kubenswrapper[4814]: I0216 09:52:36.391074 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:36 crc kubenswrapper[4814]: I0216 09:52:36.419697 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" podStartSLOduration=1.419674558 podStartE2EDuration="1.419674558s" podCreationTimestamp="2026-02-16 09:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:52:36.415263309 +0000 UTC m=+414.108419489" watchObservedRunningTime="2026-02-16 09:52:36.419674558 +0000 UTC m=+414.112830738" Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.004460 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb2178d-394e-4d6b-baf0-8242e537aa1e" path="/var/lib/kubelet/pods/afb2178d-394e-4d6b-baf0-8242e537aa1e/volumes" Feb 16 09:52:37 crc 
kubenswrapper[4814]: I0216 09:52:37.005133 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b772d6e0-ae59-4ddb-b5f8-301ac88ec747" path="/var/lib/kubelet/pods/b772d6e0-ae59-4ddb-b5f8-301ac88ec747/volumes" Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.960220 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.960640 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.960692 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.961352 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 09:52:37 crc kubenswrapper[4814]: I0216 09:52:37.961406 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506" 
gracePeriod=600 Feb 16 09:52:38 crc kubenswrapper[4814]: I0216 09:52:38.411412 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506" exitCode=0 Feb 16 09:52:38 crc kubenswrapper[4814]: I0216 09:52:38.411523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506"} Feb 16 09:52:38 crc kubenswrapper[4814]: I0216 09:52:38.412001 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb"} Feb 16 09:52:38 crc kubenswrapper[4814]: I0216 09:52:38.412036 4814 scope.go:117] "RemoveContainer" containerID="0a484d7d777521972d52a5defaea1f80b690329155679f02b643128e5e94594a" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.718123 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-thm8v"] Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.720575 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.723420 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.733961 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thm8v"] Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.904087 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-catalog-content\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.904157 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jls8\" (UniqueName: \"kubernetes.io/projected/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-kube-api-access-5jls8\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.904411 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-utilities\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.920968 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s6k7j"] Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.922490 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.926347 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 09:52:48 crc kubenswrapper[4814]: I0216 09:52:48.927933 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6k7j"] Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006130 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-utilities\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006216 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-catalog-content\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006263 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-catalog-content\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006299 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jls8\" (UniqueName: \"kubernetes.io/projected/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-kube-api-access-5jls8\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " 
pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006353 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswk9\" (UniqueName: \"kubernetes.io/projected/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-kube-api-access-gswk9\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006501 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-utilities\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.006858 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-utilities\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.007158 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-catalog-content\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.029768 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jls8\" (UniqueName: \"kubernetes.io/projected/4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5-kube-api-access-5jls8\") pod \"community-operators-thm8v\" (UID: \"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5\") " 
pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.045140 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.107653 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-catalog-content\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.107733 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswk9\" (UniqueName: \"kubernetes.io/projected/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-kube-api-access-gswk9\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.107770 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-utilities\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.108434 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-utilities\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.108983 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-catalog-content\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.130613 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gswk9\" (UniqueName: \"kubernetes.io/projected/ad487dcb-3042-4cfe-abe7-0c9df7cc212c-kube-api-access-gswk9\") pod \"redhat-marketplace-s6k7j\" (UID: \"ad487dcb-3042-4cfe-abe7-0c9df7cc212c\") " pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.248455 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.530161 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thm8v"] Feb 16 09:52:49 crc kubenswrapper[4814]: I0216 09:52:49.659497 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s6k7j"] Feb 16 09:52:49 crc kubenswrapper[4814]: W0216 09:52:49.667750 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad487dcb_3042_4cfe_abe7_0c9df7cc212c.slice/crio-aa63cf71766d9e298b396e0b1288aec5d817f4542af84fa4129a1b3bbc1955a0 WatchSource:0}: Error finding container aa63cf71766d9e298b396e0b1288aec5d817f4542af84fa4129a1b3bbc1955a0: Status 404 returned error can't find the container with id aa63cf71766d9e298b396e0b1288aec5d817f4542af84fa4129a1b3bbc1955a0 Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 09:52:50.498606 4814 generic.go:334] "Generic (PLEG): container finished" podID="ad487dcb-3042-4cfe-abe7-0c9df7cc212c" containerID="cecc34f9371058e427c58aab9c981e7a3a921e7910315afa3037388cebab8fc4" exitCode=0 Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 
09:52:50.498740 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6k7j" event={"ID":"ad487dcb-3042-4cfe-abe7-0c9df7cc212c","Type":"ContainerDied","Data":"cecc34f9371058e427c58aab9c981e7a3a921e7910315afa3037388cebab8fc4"} Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 09:52:50.499171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6k7j" event={"ID":"ad487dcb-3042-4cfe-abe7-0c9df7cc212c","Type":"ContainerStarted","Data":"aa63cf71766d9e298b396e0b1288aec5d817f4542af84fa4129a1b3bbc1955a0"} Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 09:52:50.501098 4814 generic.go:334] "Generic (PLEG): container finished" podID="4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5" containerID="cae26073ba0f784bcea68a20ce9c2749dcf31954fa508211841b32347cc507c0" exitCode=0 Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 09:52:50.501161 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thm8v" event={"ID":"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5","Type":"ContainerDied","Data":"cae26073ba0f784bcea68a20ce9c2749dcf31954fa508211841b32347cc507c0"} Feb 16 09:52:50 crc kubenswrapper[4814]: I0216 09:52:50.501203 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thm8v" event={"ID":"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5","Type":"ContainerStarted","Data":"a7c374a72cbffab9df07861e40804787030b0e4e13141b93c2a82cab3b068a13"} Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.118641 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mxvf2"] Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.121686 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.128305 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.129836 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxvf2"] Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.241164 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-utilities\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.241234 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-catalog-content\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.241273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94jlv\" (UniqueName: \"kubernetes.io/projected/5eb190c6-74c7-4b35-b748-ece1660772f1-kube-api-access-94jlv\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.312987 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j5d9z"] Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.325224 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.328341 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.340503 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j5d9z"] Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.344122 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-utilities\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.344253 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-catalog-content\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.344309 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94jlv\" (UniqueName: \"kubernetes.io/projected/5eb190c6-74c7-4b35-b748-ece1660772f1-kube-api-access-94jlv\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.345312 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-utilities\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 
09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.345598 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eb190c6-74c7-4b35-b748-ece1660772f1-catalog-content\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.372892 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94jlv\" (UniqueName: \"kubernetes.io/projected/5eb190c6-74c7-4b35-b748-ece1660772f1-kube-api-access-94jlv\") pod \"redhat-operators-mxvf2\" (UID: \"5eb190c6-74c7-4b35-b748-ece1660772f1\") " pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.445923 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-utilities\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.446029 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jr9\" (UniqueName: \"kubernetes.io/projected/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-kube-api-access-n9jr9\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.446141 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-catalog-content\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " 
pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.473834 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.509343 4814 generic.go:334] "Generic (PLEG): container finished" podID="ad487dcb-3042-4cfe-abe7-0c9df7cc212c" containerID="37c2f30cde52007b69a57bdff282b0900c4fa9be9be3bf3f5793eae607814b72" exitCode=0 Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.509409 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6k7j" event={"ID":"ad487dcb-3042-4cfe-abe7-0c9df7cc212c","Type":"ContainerDied","Data":"37c2f30cde52007b69a57bdff282b0900c4fa9be9be3bf3f5793eae607814b72"} Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.511229 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thm8v" event={"ID":"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5","Type":"ContainerStarted","Data":"00adacdcbd031ea9f1ff9dc8bd05ecdcf054a2106998cd2323e84893ee9dd60e"} Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.547694 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jr9\" (UniqueName: \"kubernetes.io/projected/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-kube-api-access-n9jr9\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.548134 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-catalog-content\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 
09:52:51.548194 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-utilities\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.548854 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-catalog-content\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.549166 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-utilities\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.569769 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jr9\" (UniqueName: \"kubernetes.io/projected/690c572b-3be5-4f1d-bb8b-c618d3e9e6d5-kube-api-access-n9jr9\") pod \"certified-operators-j5d9z\" (UID: \"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5\") " pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.662360 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:52:51 crc kubenswrapper[4814]: I0216 09:52:51.929569 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mxvf2"] Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.092295 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j5d9z"] Feb 16 09:52:52 crc kubenswrapper[4814]: W0216 09:52:52.101297 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod690c572b_3be5_4f1d_bb8b_c618d3e9e6d5.slice/crio-7f70460f6aa7c14c01bcabf54219f2014284cd43c89a3bef62ae3e3bc40195ad WatchSource:0}: Error finding container 7f70460f6aa7c14c01bcabf54219f2014284cd43c89a3bef62ae3e3bc40195ad: Status 404 returned error can't find the container with id 7f70460f6aa7c14c01bcabf54219f2014284cd43c89a3bef62ae3e3bc40195ad Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.518994 4814 generic.go:334] "Generic (PLEG): container finished" podID="5eb190c6-74c7-4b35-b748-ece1660772f1" containerID="5c380d373398ff31cbc237dd35bf2be970f30c37554dc6ab7ed8141514b8d2dd" exitCode=0 Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.519107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxvf2" event={"ID":"5eb190c6-74c7-4b35-b748-ece1660772f1","Type":"ContainerDied","Data":"5c380d373398ff31cbc237dd35bf2be970f30c37554dc6ab7ed8141514b8d2dd"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.519658 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxvf2" event={"ID":"5eb190c6-74c7-4b35-b748-ece1660772f1","Type":"ContainerStarted","Data":"c05ca366e2ce3a064e98cc3bb70ea1934663534c4b632c5461564aef2dcc3a3f"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.522027 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="690c572b-3be5-4f1d-bb8b-c618d3e9e6d5" containerID="0db9eddba8a0dfca298ea6d068e076c3f952de7f20719d524d3d38dea545e2d8" exitCode=0 Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.522064 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5d9z" event={"ID":"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5","Type":"ContainerDied","Data":"0db9eddba8a0dfca298ea6d068e076c3f952de7f20719d524d3d38dea545e2d8"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.522094 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5d9z" event={"ID":"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5","Type":"ContainerStarted","Data":"7f70460f6aa7c14c01bcabf54219f2014284cd43c89a3bef62ae3e3bc40195ad"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.527726 4814 generic.go:334] "Generic (PLEG): container finished" podID="4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5" containerID="00adacdcbd031ea9f1ff9dc8bd05ecdcf054a2106998cd2323e84893ee9dd60e" exitCode=0 Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.527773 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thm8v" event={"ID":"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5","Type":"ContainerDied","Data":"00adacdcbd031ea9f1ff9dc8bd05ecdcf054a2106998cd2323e84893ee9dd60e"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.531157 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s6k7j" event={"ID":"ad487dcb-3042-4cfe-abe7-0c9df7cc212c","Type":"ContainerStarted","Data":"0c327e03428f9ffa43fe9982ef39b93bee8f2d98acf3df6dcb2f5cf4aa6d5f92"} Feb 16 09:52:52 crc kubenswrapper[4814]: I0216 09:52:52.588720 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s6k7j" podStartSLOduration=3.132089416 podStartE2EDuration="4.588695634s" podCreationTimestamp="2026-02-16 09:52:48 +0000 UTC" 
firstStartedPulling="2026-02-16 09:52:50.50116098 +0000 UTC m=+428.194317160" lastFinishedPulling="2026-02-16 09:52:51.957767198 +0000 UTC m=+429.650923378" observedRunningTime="2026-02-16 09:52:52.585708214 +0000 UTC m=+430.278864404" watchObservedRunningTime="2026-02-16 09:52:52.588695634 +0000 UTC m=+430.281851814" Feb 16 09:52:53 crc kubenswrapper[4814]: I0216 09:52:53.539007 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxvf2" event={"ID":"5eb190c6-74c7-4b35-b748-ece1660772f1","Type":"ContainerStarted","Data":"807e77ac70beb78554253aec5f5d1fcc3b2df9b6c41651194e7bd2d83088c2ec"} Feb 16 09:52:53 crc kubenswrapper[4814]: I0216 09:52:53.559362 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thm8v" event={"ID":"4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5","Type":"ContainerStarted","Data":"14b36b2844101f2a74ddaac3481d37952f90896ddc3d00482e21a9effe2e4bfe"} Feb 16 09:52:53 crc kubenswrapper[4814]: I0216 09:52:53.605040 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-thm8v" podStartSLOduration=3.104587105 podStartE2EDuration="5.605015492s" podCreationTimestamp="2026-02-16 09:52:48 +0000 UTC" firstStartedPulling="2026-02-16 09:52:50.506303988 +0000 UTC m=+428.199460168" lastFinishedPulling="2026-02-16 09:52:53.006732375 +0000 UTC m=+430.699888555" observedRunningTime="2026-02-16 09:52:53.598193509 +0000 UTC m=+431.291349689" watchObservedRunningTime="2026-02-16 09:52:53.605015492 +0000 UTC m=+431.298171672" Feb 16 09:52:54 crc kubenswrapper[4814]: I0216 09:52:54.570268 4814 generic.go:334] "Generic (PLEG): container finished" podID="5eb190c6-74c7-4b35-b748-ece1660772f1" containerID="807e77ac70beb78554253aec5f5d1fcc3b2df9b6c41651194e7bd2d83088c2ec" exitCode=0 Feb 16 09:52:54 crc kubenswrapper[4814]: I0216 09:52:54.570360 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mxvf2" event={"ID":"5eb190c6-74c7-4b35-b748-ece1660772f1","Type":"ContainerDied","Data":"807e77ac70beb78554253aec5f5d1fcc3b2df9b6c41651194e7bd2d83088c2ec"} Feb 16 09:52:54 crc kubenswrapper[4814]: I0216 09:52:54.572949 4814 generic.go:334] "Generic (PLEG): container finished" podID="690c572b-3be5-4f1d-bb8b-c618d3e9e6d5" containerID="8b7f4e7f684eb3b261712c1a59e7ee32caf901d32b1e2158380380b596638c3b" exitCode=0 Feb 16 09:52:54 crc kubenswrapper[4814]: I0216 09:52:54.573020 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5d9z" event={"ID":"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5","Type":"ContainerDied","Data":"8b7f4e7f684eb3b261712c1a59e7ee32caf901d32b1e2158380380b596638c3b"} Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.583252 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mxvf2" event={"ID":"5eb190c6-74c7-4b35-b748-ece1660772f1","Type":"ContainerStarted","Data":"77105c86450c9d59e3a2d94c7038bb0db9f8873f7791fd5d31e7bf09b730d3ce"} Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.587236 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5d9z" event={"ID":"690c572b-3be5-4f1d-bb8b-c618d3e9e6d5","Type":"ContainerStarted","Data":"40b62a63621e16f8ddcd1d4ee2eb091feaf86221cc1e1ec9c0d846f2438113de"} Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.608263 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mxvf2" podStartSLOduration=2.07293272 podStartE2EDuration="4.608238766s" podCreationTimestamp="2026-02-16 09:52:51 +0000 UTC" firstStartedPulling="2026-02-16 09:52:52.521293139 +0000 UTC m=+430.214449319" lastFinishedPulling="2026-02-16 09:52:55.056599195 +0000 UTC m=+432.749755365" observedRunningTime="2026-02-16 09:52:55.604220628 +0000 UTC m=+433.297376808" 
watchObservedRunningTime="2026-02-16 09:52:55.608238766 +0000 UTC m=+433.301394946" Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.734790 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-pcwpm" Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.759170 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j5d9z" podStartSLOduration=2.269074193 podStartE2EDuration="4.759136191s" podCreationTimestamp="2026-02-16 09:52:51 +0000 UTC" firstStartedPulling="2026-02-16 09:52:52.525236165 +0000 UTC m=+430.218392345" lastFinishedPulling="2026-02-16 09:52:55.015298163 +0000 UTC m=+432.708454343" observedRunningTime="2026-02-16 09:52:55.639780225 +0000 UTC m=+433.332936405" watchObservedRunningTime="2026-02-16 09:52:55.759136191 +0000 UTC m=+433.452292381" Feb 16 09:52:55 crc kubenswrapper[4814]: I0216 09:52:55.798345 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"] Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.046112 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.046694 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.107477 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.249149 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.249364 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.283630 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.661557 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s6k7j" Feb 16 09:52:59 crc kubenswrapper[4814]: I0216 09:52:59.664758 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-thm8v" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.474907 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.475464 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.522798 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.662618 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.662696 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.670471 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mxvf2" Feb 16 09:53:01 crc kubenswrapper[4814]: I0216 09:53:01.711620 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:53:02 crc kubenswrapper[4814]: I0216 
09:53:02.711491 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j5d9z" Feb 16 09:53:20 crc kubenswrapper[4814]: I0216 09:53:20.851097 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" podUID="a02ac473-c7bb-4702-ac42-f0e973d03f05" containerName="registry" containerID="cri-o://a9eb41b6998347f20822c5e1fde8be661124d7c23b7fbe80687f68c76a2edd15" gracePeriod=30 Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.762132 4814 generic.go:334] "Generic (PLEG): container finished" podID="a02ac473-c7bb-4702-ac42-f0e973d03f05" containerID="a9eb41b6998347f20822c5e1fde8be661124d7c23b7fbe80687f68c76a2edd15" exitCode=0 Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.762274 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" event={"ID":"a02ac473-c7bb-4702-ac42-f0e973d03f05","Type":"ContainerDied","Data":"a9eb41b6998347f20822c5e1fde8be661124d7c23b7fbe80687f68c76a2edd15"} Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.762750 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" event={"ID":"a02ac473-c7bb-4702-ac42-f0e973d03f05","Type":"ContainerDied","Data":"9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4"} Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.762772 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f29ff96a57b476a78ec7126adaef861dea17321dd3fc9bcd3773d995901c3d4" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.789122 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.969435 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.969603 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.969731 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.969841 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.970097 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.970152 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wzfkg\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.970187 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.970229 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls\") pod \"a02ac473-c7bb-4702-ac42-f0e973d03f05\" (UID: \"a02ac473-c7bb-4702-ac42-f0e973d03f05\") " Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.971051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.971937 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.978315 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.978368 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.978896 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.980056 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg" (OuterVolumeSpecName: "kube-api-access-wzfkg") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "kube-api-access-wzfkg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.987763 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 09:53:21 crc kubenswrapper[4814]: I0216 09:53:21.988058 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a02ac473-c7bb-4702-ac42-f0e973d03f05" (UID: "a02ac473-c7bb-4702-ac42-f0e973d03f05"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.071677 4814 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a02ac473-c7bb-4702-ac42-f0e973d03f05-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072129 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzfkg\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-kube-api-access-wzfkg\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072142 4814 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072153 4814 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072161 4814 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a02ac473-c7bb-4702-ac42-f0e973d03f05-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072171 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a02ac473-c7bb-4702-ac42-f0e973d03f05-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.072179 4814 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a02ac473-c7bb-4702-ac42-f0e973d03f05-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.770253 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tb5k2" Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.817374 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"] Feb 16 09:53:22 crc kubenswrapper[4814]: I0216 09:53:22.824014 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tb5k2"] Feb 16 09:53:23 crc kubenswrapper[4814]: I0216 09:53:23.005950 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02ac473-c7bb-4702-ac42-f0e973d03f05" path="/var/lib/kubelet/pods/a02ac473-c7bb-4702-ac42-f0e973d03f05/volumes" Feb 16 09:54:43 crc kubenswrapper[4814]: I0216 09:54:43.349135 4814 scope.go:117] "RemoveContainer" containerID="a9eb41b6998347f20822c5e1fde8be661124d7c23b7fbe80687f68c76a2edd15" Feb 16 09:55:07 crc kubenswrapper[4814]: I0216 09:55:07.960454 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:55:07 crc kubenswrapper[4814]: I0216 09:55:07.961182 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:55:37 crc kubenswrapper[4814]: I0216 09:55:37.960006 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:55:37 
crc kubenswrapper[4814]: I0216 09:55:37.960915 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:56:07 crc kubenswrapper[4814]: I0216 09:56:07.959889 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:56:07 crc kubenswrapper[4814]: I0216 09:56:07.960527 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:56:07 crc kubenswrapper[4814]: I0216 09:56:07.960648 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:56:07 crc kubenswrapper[4814]: I0216 09:56:07.961377 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 09:56:07 crc kubenswrapper[4814]: I0216 09:56:07.961473 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb" gracePeriod=600 Feb 16 09:56:08 crc kubenswrapper[4814]: I0216 09:56:08.906468 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb" exitCode=0 Feb 16 09:56:08 crc kubenswrapper[4814]: I0216 09:56:08.906573 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb"} Feb 16 09:56:08 crc kubenswrapper[4814]: I0216 09:56:08.907638 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2"} Feb 16 09:56:08 crc kubenswrapper[4814]: I0216 09:56:08.907693 4814 scope.go:117] "RemoveContainer" containerID="77fad05b79ecca7c319e23468d7a63b9cba584ba0b7e81b7c171315d92fc9506" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.818758 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7"] Feb 16 09:57:50 crc kubenswrapper[4814]: E0216 09:57:50.820965 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02ac473-c7bb-4702-ac42-f0e973d03f05" containerName="registry" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.820995 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02ac473-c7bb-4702-ac42-f0e973d03f05" containerName="registry" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.822060 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a02ac473-c7bb-4702-ac42-f0e973d03f05" containerName="registry" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.823769 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.847982 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.848871 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jmdbr" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.849148 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.863366 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7"] Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.870989 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-f8z96"] Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.873237 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-f8z96" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.878059 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xflnk" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.879266 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fxwzc"] Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.880058 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.882568 4814 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ghkx9" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.895059 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fxwzc"] Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.910188 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-f8z96"] Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.998074 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5gd\" (UniqueName: \"kubernetes.io/projected/94e88850-618a-40c9-85a0-6813e57e7715-kube-api-access-2k5gd\") pod \"cert-manager-webhook-687f57d79b-fxwzc\" (UID: \"94e88850-618a-40c9-85a0-6813e57e7715\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.998162 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5b2t\" (UniqueName: \"kubernetes.io/projected/ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331-kube-api-access-h5b2t\") pod \"cert-manager-858654f9db-f8z96\" (UID: \"ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331\") " pod="cert-manager/cert-manager-858654f9db-f8z96" Feb 16 09:57:50 crc kubenswrapper[4814]: I0216 09:57:50.998319 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbsd\" (UniqueName: \"kubernetes.io/projected/8ced6e31-bb91-4c18-a157-2daa6ca09a74-kube-api-access-2fbsd\") pod \"cert-manager-cainjector-cf98fcc89-9w2f7\" (UID: \"8ced6e31-bb91-4c18-a157-2daa6ca09a74\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.100016 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5b2t\" (UniqueName: \"kubernetes.io/projected/ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331-kube-api-access-h5b2t\") pod \"cert-manager-858654f9db-f8z96\" (UID: \"ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331\") " pod="cert-manager/cert-manager-858654f9db-f8z96" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.100245 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fbsd\" (UniqueName: \"kubernetes.io/projected/8ced6e31-bb91-4c18-a157-2daa6ca09a74-kube-api-access-2fbsd\") pod \"cert-manager-cainjector-cf98fcc89-9w2f7\" (UID: \"8ced6e31-bb91-4c18-a157-2daa6ca09a74\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.100293 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5gd\" (UniqueName: \"kubernetes.io/projected/94e88850-618a-40c9-85a0-6813e57e7715-kube-api-access-2k5gd\") pod \"cert-manager-webhook-687f57d79b-fxwzc\" (UID: \"94e88850-618a-40c9-85a0-6813e57e7715\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.129774 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5gd\" (UniqueName: \"kubernetes.io/projected/94e88850-618a-40c9-85a0-6813e57e7715-kube-api-access-2k5gd\") pod \"cert-manager-webhook-687f57d79b-fxwzc\" (UID: \"94e88850-618a-40c9-85a0-6813e57e7715\") " pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.129883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5b2t\" (UniqueName: \"kubernetes.io/projected/ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331-kube-api-access-h5b2t\") pod \"cert-manager-858654f9db-f8z96\" (UID: \"ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331\") " 
pod="cert-manager/cert-manager-858654f9db-f8z96" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.135394 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fbsd\" (UniqueName: \"kubernetes.io/projected/8ced6e31-bb91-4c18-a157-2daa6ca09a74-kube-api-access-2fbsd\") pod \"cert-manager-cainjector-cf98fcc89-9w2f7\" (UID: \"8ced6e31-bb91-4c18-a157-2daa6ca09a74\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.224587 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.232851 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-f8z96" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.245147 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.484850 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-f8z96"] Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.497216 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.519303 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7"] Feb 16 09:57:51 crc kubenswrapper[4814]: W0216 09:57:51.530178 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ced6e31_bb91_4c18_a157_2daa6ca09a74.slice/crio-240dab1665a9fb758446d98c1eee075a8dc179670f5c766c3afc5ceb65e90f37 WatchSource:0}: Error finding container 240dab1665a9fb758446d98c1eee075a8dc179670f5c766c3afc5ceb65e90f37: Status 404 returned 
error can't find the container with id 240dab1665a9fb758446d98c1eee075a8dc179670f5c766c3afc5ceb65e90f37 Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.569813 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-fxwzc"] Feb 16 09:57:51 crc kubenswrapper[4814]: W0216 09:57:51.574338 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94e88850_618a_40c9_85a0_6813e57e7715.slice/crio-3de79ad71a3fa6da663f5bf61cea9e5abfd69c78b6c2e8dfba7185823461d3db WatchSource:0}: Error finding container 3de79ad71a3fa6da663f5bf61cea9e5abfd69c78b6c2e8dfba7185823461d3db: Status 404 returned error can't find the container with id 3de79ad71a3fa6da663f5bf61cea9e5abfd69c78b6c2e8dfba7185823461d3db Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.604253 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-f8z96" event={"ID":"ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331","Type":"ContainerStarted","Data":"7acc60a62ae9eb522ad578fd867975db4686206f6b0bcfa25d251a19c99b935b"} Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.605561 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" event={"ID":"8ced6e31-bb91-4c18-a157-2daa6ca09a74","Type":"ContainerStarted","Data":"240dab1665a9fb758446d98c1eee075a8dc179670f5c766c3afc5ceb65e90f37"} Feb 16 09:57:51 crc kubenswrapper[4814]: I0216 09:57:51.607131 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" event={"ID":"94e88850-618a-40c9-85a0-6813e57e7715","Type":"ContainerStarted","Data":"3de79ad71a3fa6da663f5bf61cea9e5abfd69c78b6c2e8dfba7185823461d3db"} Feb 16 09:57:54 crc kubenswrapper[4814]: I0216 09:57:54.630818 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-f8z96" 
event={"ID":"ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331","Type":"ContainerStarted","Data":"85423b2e5d28a60c559b3eccd3e75fdbf5f8f572a6375c78cce1abfc96da0163"} Feb 16 09:57:55 crc kubenswrapper[4814]: I0216 09:57:55.658577 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-f8z96" podStartSLOduration=2.766299525 podStartE2EDuration="5.658548741s" podCreationTimestamp="2026-02-16 09:57:50 +0000 UTC" firstStartedPulling="2026-02-16 09:57:51.496743953 +0000 UTC m=+729.189900133" lastFinishedPulling="2026-02-16 09:57:54.388993169 +0000 UTC m=+732.082149349" observedRunningTime="2026-02-16 09:57:55.656293514 +0000 UTC m=+733.349449714" watchObservedRunningTime="2026-02-16 09:57:55.658548741 +0000 UTC m=+733.351704931" Feb 16 09:57:56 crc kubenswrapper[4814]: I0216 09:57:56.648348 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" event={"ID":"8ced6e31-bb91-4c18-a157-2daa6ca09a74","Type":"ContainerStarted","Data":"aea3c82093e3494a4170961f6d218f561f295edade6ba7e0b0a2c6b39d191dd2"} Feb 16 09:57:56 crc kubenswrapper[4814]: I0216 09:57:56.650928 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" event={"ID":"94e88850-618a-40c9-85a0-6813e57e7715","Type":"ContainerStarted","Data":"d592b72bba2c514ca6958e24ae3d1b9982daa9abd0f793f5802665dd9d052863"} Feb 16 09:57:56 crc kubenswrapper[4814]: I0216 09:57:56.651067 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:57:56 crc kubenswrapper[4814]: I0216 09:57:56.667339 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9w2f7" podStartSLOduration=2.079529575 podStartE2EDuration="6.667309453s" podCreationTimestamp="2026-02-16 09:57:50 +0000 UTC" firstStartedPulling="2026-02-16 09:57:51.533200317 +0000 UTC 
m=+729.226356497" lastFinishedPulling="2026-02-16 09:57:56.120980175 +0000 UTC m=+733.814136375" observedRunningTime="2026-02-16 09:57:56.666078816 +0000 UTC m=+734.359234996" watchObservedRunningTime="2026-02-16 09:57:56.667309453 +0000 UTC m=+734.360465653" Feb 16 09:57:56 crc kubenswrapper[4814]: I0216 09:57:56.691607 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" podStartSLOduration=2.090241082 podStartE2EDuration="6.691578659s" podCreationTimestamp="2026-02-16 09:57:50 +0000 UTC" firstStartedPulling="2026-02-16 09:57:51.577641238 +0000 UTC m=+729.270797418" lastFinishedPulling="2026-02-16 09:57:56.178978815 +0000 UTC m=+733.872134995" observedRunningTime="2026-02-16 09:57:56.689655022 +0000 UTC m=+734.382811252" watchObservedRunningTime="2026-02-16 09:57:56.691578659 +0000 UTC m=+734.384734839" Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.886071 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ghlbk"] Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887429 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-controller" containerID="cri-o://8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887443 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="nbdb" containerID="cri-o://f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887602 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" 
podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="sbdb" containerID="cri-o://3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887643 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-acl-logging" containerID="cri-o://27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887617 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-node" containerID="cri-o://e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887639 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.887683 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="northd" containerID="cri-o://0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692" gracePeriod=30 Feb 16 09:58:00 crc kubenswrapper[4814]: I0216 09:58:00.983005 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller" 
containerID="cri-o://652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" gracePeriod=30 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.249089 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-fxwzc" Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.277756 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e is running failed: container process not found" containerID="652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.278228 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e is running failed: container process not found" containerID="652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.278523 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e is running failed: container process not found" containerID="652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.278582 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.689033 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovnkube-controller/3.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.692742 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-acl-logging/0.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693332 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-controller/0.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693748 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" exitCode=0 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693776 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf" exitCode=0 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693784 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26" exitCode=0 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693791 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692" exitCode=0 Feb 16 
09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693798 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38" exitCode=0 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693806 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c" exitCode=0 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693812 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2" exitCode=143 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693820 4814 generic.go:334] "Generic (PLEG): container finished" podID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerID="8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3" exitCode=143 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693908 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693931 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26"} Feb 16 09:58:01 crc 
kubenswrapper[4814]: I0216 09:58:01.693949 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693959 4814 scope.go:117] "RemoveContainer" containerID="7c1bb6167bf69136aa3494e5f9243df1df1d5cf9c4c30d57012d054191de19b2" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693964 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.693985 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.694001 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.694017 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.696205 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/2.log" Feb 16 09:58:01 crc kubenswrapper[4814]: 
I0216 09:58:01.696820 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/1.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.696871 4814 generic.go:334] "Generic (PLEG): container finished" podID="419c1fde-3a56-45c4-b6aa-5c5b8cde8db6" containerID="ee393866ad3987bd8516a16241c7e7c3516784ac2be70efcdd49929dfcad36fd" exitCode=2 Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.696912 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerDied","Data":"ee393866ad3987bd8516a16241c7e7c3516784ac2be70efcdd49929dfcad36fd"} Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.697590 4814 scope.go:117] "RemoveContainer" containerID="ee393866ad3987bd8516a16241c7e7c3516784ac2be70efcdd49929dfcad36fd" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.752623 4814 scope.go:117] "RemoveContainer" containerID="cd66ce06b5cb7823b8b82804442d52812927f376557e904b28559c4cd26d7630" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.879314 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-acl-logging/0.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.880024 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-controller/0.log" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.880712 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.937807 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nj8mm"] Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938200 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-controller" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938224 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-controller" Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938241 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="sbdb" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938252 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="sbdb" Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938272 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-node" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938283 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-node" Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938297 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller" Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938307 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller" Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938318 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" 
containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938328 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938343 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kubecfg-setup"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938353 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kubecfg-setup"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938369 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="nbdb"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938376 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="nbdb"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938387 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="northd"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938395 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="northd"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938407 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-acl-logging"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938415 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-acl-logging"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938426 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938435 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938443 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938451 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.938464 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938474 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938817 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-node"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938844 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-acl-logging"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938857 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938868 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="sbdb"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938883 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="nbdb"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938898 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938912 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938925 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="kube-rbac-proxy-ovn-metrics"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938938 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="northd"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938954 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.938969 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovn-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: E0216 09:58:01.939107 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.939121 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.939266 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" containerName="ovnkube-controller"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.947902 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.970984 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971064 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971101 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtjxs\" (UniqueName: \"kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971123 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971141 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971202 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971225 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971251 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971288 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971309 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971373 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket" (OuterVolumeSpecName: "log-socket") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971310 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971406 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971488 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971549 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971579 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971644 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971679 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971725 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971801 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971851 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971874 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971896 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.971923 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin\") pod \"53ed6503-5c40-4a82-985c-dc46bc5daaed\" (UID: \"53ed6503-5c40-4a82-985c-dc46bc5daaed\") "
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972230 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972351 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972401 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972415 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash" (OuterVolumeSpecName: "host-slash") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972438 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972444 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log" (OuterVolumeSpecName: "node-log") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972488 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972489 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972522 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972939 4814 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-slash\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973021 4814 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-node-log\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973081 4814 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973140 4814 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973192 4814 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.972993 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973022 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973105 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973211 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973439 4814 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973545 4814 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973636 4814 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973746 4814 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-log-socket\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973840 4814 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973919 4814 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.973989 4814 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.974066 4814 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.983570 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.988109 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs" (OuterVolumeSpecName: "kube-api-access-mtjxs") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "kube-api-access-mtjxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 09:58:01 crc kubenswrapper[4814]: I0216 09:58:01.994427 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "53ed6503-5c40-4a82-985c-dc46bc5daaed" (UID: "53ed6503-5c40-4a82-985c-dc46bc5daaed"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.074953 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-netns\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075018 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-script-lib\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075076 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-slash\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075101 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075122 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-config\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075140 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-bin\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075161 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-systemd-units\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075200 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-var-lib-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075398 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-node-log\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.075757 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-env-overrides\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076038 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-ovn\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076123 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076152 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-netd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076180 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7j2f\" (UniqueName: \"kubernetes.io/projected/6b93c428-5004-472a-a4e1-aadeffa9b3d0-kube-api-access-n7j2f\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076228 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-etc-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076307 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076339 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-kubelet\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076424 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-systemd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076450 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-log-socket\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076545 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtjxs\" (UniqueName: \"kubernetes.io/projected/53ed6503-5c40-4a82-985c-dc46bc5daaed-kube-api-access-mtjxs\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076567 4814 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076579 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076589 4814 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076600 4814 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076608 4814 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/53ed6503-5c40-4a82-985c-dc46bc5daaed-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.076619 4814 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/53ed6503-5c40-4a82-985c-dc46bc5daaed-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177380 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-log-socket\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177444 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-systemd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177491 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-netns\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177518 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-script-lib\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177562 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-slash\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177581 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177614 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-config\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177630 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-bin\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177654 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm"
Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177671 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-systemd-units\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") "
pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177690 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-var-lib-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177710 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-node-log\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-env-overrides\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177780 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-ovn\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177796 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc 
kubenswrapper[4814]: I0216 09:58:02.177829 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-netd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177850 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7j2f\" (UniqueName: \"kubernetes.io/projected/6b93c428-5004-472a-a4e1-aadeffa9b3d0-kube-api-access-n7j2f\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177877 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-etc-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177894 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.177911 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-kubelet\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc 
kubenswrapper[4814]: I0216 09:58:02.178019 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-kubelet\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.178067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-log-socket\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.178088 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-systemd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.179261 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-netns\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.180132 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-script-lib\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.180494 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-slash\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.180527 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.181094 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovnkube-config\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.181141 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-bin\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.184168 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b93c428-5004-472a-a4e1-aadeffa9b3d0-ovn-node-metrics-cert\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.184230 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-systemd-units\") pod 
\"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.184267 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-var-lib-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.184300 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-node-log\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.184840 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b93c428-5004-472a-a4e1-aadeffa9b3d0-env-overrides\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.185447 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-run-ovn\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.185476 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.185719 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-etc-openvswitch\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.185758 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.185791 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b93c428-5004-472a-a4e1-aadeffa9b3d0-host-cni-netd\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.204471 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7j2f\" (UniqueName: \"kubernetes.io/projected/6b93c428-5004-472a-a4e1-aadeffa9b3d0-kube-api-access-n7j2f\") pod \"ovnkube-node-nj8mm\" (UID: \"6b93c428-5004-472a-a4e1-aadeffa9b3d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.270929 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.709403 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwtrg_419c1fde-3a56-45c4-b6aa-5c5b8cde8db6/kube-multus/2.log" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.709577 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwtrg" event={"ID":"419c1fde-3a56-45c4-b6aa-5c5b8cde8db6","Type":"ContainerStarted","Data":"445313da5a591d2e070a3a4c5e63573530b0240df34b17b23add8567e071ef7b"} Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.716208 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-acl-logging/0.log" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.717199 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-ghlbk_53ed6503-5c40-4a82-985c-dc46bc5daaed/ovn-controller/0.log" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.717890 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" event={"ID":"53ed6503-5c40-4a82-985c-dc46bc5daaed","Type":"ContainerDied","Data":"14a9006b5b46a579222853a23cc353fcf3bd97adbe0e982fdf70e74019038ac9"} Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.717994 4814 scope.go:117] "RemoveContainer" containerID="652e392eeca8b506175b52b2d89506ef04625c241d7bcf6d9a28dd9386b6640e" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.718045 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ghlbk" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.720371 4814 generic.go:334] "Generic (PLEG): container finished" podID="6b93c428-5004-472a-a4e1-aadeffa9b3d0" containerID="b26fde32347264ad7826d56adda111b700b3840771e80690051aabb7036c90dd" exitCode=0 Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.720444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerDied","Data":"b26fde32347264ad7826d56adda111b700b3840771e80690051aabb7036c90dd"} Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.720510 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"b0fa8900b961dc559e0c4b817ef04a42c320f47eaf2e3a5db71a4fc4e0ad1926"} Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.769273 4814 scope.go:117] "RemoveContainer" containerID="3411836fbe729c9eb5b61153f895cb3a287a50f3bb1e3c5939ea9d54a82ec7bf" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.796883 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ghlbk"] Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.797049 4814 scope.go:117] "RemoveContainer" containerID="f323d5ec454c8a576c51680efef76bed6845a152bed480a2ffecb5cfabc2df26" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.799326 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ghlbk"] Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.820462 4814 scope.go:117] "RemoveContainer" containerID="0818bccdbb66721c56cdebf02ccbe7d9d3881a356285046de7cd4b822c6da692" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.834020 4814 scope.go:117] "RemoveContainer" 
containerID="5be417dd342024049035aae44258936cb0d4cee48492c6de5d5d60122b4dac38" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.849710 4814 scope.go:117] "RemoveContainer" containerID="e0b108cb433e1b4de1140b6877164e82292b482e818c3a32c331f2790407499c" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.864585 4814 scope.go:117] "RemoveContainer" containerID="27d875fd38b378df5986b731f3b49e725cee093678169eefc4e6fb379e1ba8e2" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.882782 4814 scope.go:117] "RemoveContainer" containerID="8ce0ebbfd24c88b65cee10a6bb9a47ff914e90fe116dbd2b376ef7d982c54ab3" Feb 16 09:58:02 crc kubenswrapper[4814]: I0216 09:58:02.899594 4814 scope.go:117] "RemoveContainer" containerID="2d55ece75b33b71225c43060b3031a1ed08843e299e10c185868dd9887315c9c" Feb 16 09:58:03 crc kubenswrapper[4814]: I0216 09:58:03.003039 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ed6503-5c40-4a82-985c-dc46bc5daaed" path="/var/lib/kubelet/pods/53ed6503-5c40-4a82-985c-dc46bc5daaed/volumes" Feb 16 09:58:03 crc kubenswrapper[4814]: I0216 09:58:03.737606 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"6ede7919dcd2e3568f9f9ed3b46a5a09adf3dc03f0d06da0b8f9eda80a4606dd"} Feb 16 09:58:03 crc kubenswrapper[4814]: I0216 09:58:03.737682 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"4c5391ac0fe19b7be163d3e5b11c37590f75063d2080c27881a57636e4c074cc"} Feb 16 09:58:04 crc kubenswrapper[4814]: I0216 09:58:04.752400 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" 
event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"ede1ba560e136202f742af3092a36b87a17292a5b880cd2a5f200a4543377993"} Feb 16 09:58:04 crc kubenswrapper[4814]: I0216 09:58:04.752779 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"1c2236d23256dfebc5ab4f20a6cb95fca9fa564eb61e9fd8cdeff7e40ac66ceb"} Feb 16 09:58:05 crc kubenswrapper[4814]: I0216 09:58:05.764266 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"7a48f4565058465ad7b4cbc2506a7b526bd3c52c7b923588ba1f058902c2ea65"} Feb 16 09:58:06 crc kubenswrapper[4814]: I0216 09:58:06.785070 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"72a0352508e90c87e77658e27c298ab56eda1902894059351df1870300699060"} Feb 16 09:58:08 crc kubenswrapper[4814]: I0216 09:58:08.803037 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"63a1336f755f8754086afdf75166c0f2dba4a429c61dea7986f320f630567730"} Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.813171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" event={"ID":"6b93c428-5004-472a-a4e1-aadeffa9b3d0","Type":"ContainerStarted","Data":"7dcddd558008b78b8ec13a344ca2e2f9cf644c70c26e74e904915125d73699bc"} Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.814100 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.814235 
4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.814354 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.846485 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.847320 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:09 crc kubenswrapper[4814]: I0216 09:58:09.882175 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" podStartSLOduration=8.882148492 podStartE2EDuration="8.882148492s" podCreationTimestamp="2026-02-16 09:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:58:09.847623364 +0000 UTC m=+747.540779574" watchObservedRunningTime="2026-02-16 09:58:09.882148492 +0000 UTC m=+747.575304672" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.405445 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x"] Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.407857 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.410451 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.420262 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x"] Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.610488 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz96\" (UniqueName: \"kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.610575 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.610706 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: 
I0216 09:58:27.711897 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.712019 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz96\" (UniqueName: \"kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.712053 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.712468 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.712574 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:27 crc kubenswrapper[4814]: I0216 09:58:27.738684 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz96\" (UniqueName: \"kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.032669 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.296063 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x"] Feb 16 09:58:28 crc kubenswrapper[4814]: W0216 09:58:28.306951 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e703789_c69e_4376_a513_cd7b042c66b4.slice/crio-282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489 WatchSource:0}: Error finding container 282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489: Status 404 returned error can't find the container with id 282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489 Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.792253 4814 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.952235 
4814 generic.go:334] "Generic (PLEG): container finished" podID="7e703789-c69e-4376-a513-cd7b042c66b4" containerID="247dde9df8ffb9caff2828fd9218805a7d97bef056f02614da6ead533896775b" exitCode=0 Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.952442 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" event={"ID":"7e703789-c69e-4376-a513-cd7b042c66b4","Type":"ContainerDied","Data":"247dde9df8ffb9caff2828fd9218805a7d97bef056f02614da6ead533896775b"} Feb 16 09:58:28 crc kubenswrapper[4814]: I0216 09:58:28.952835 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" event={"ID":"7e703789-c69e-4376-a513-cd7b042c66b4","Type":"ContainerStarted","Data":"282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489"} Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.732509 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.734283 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.755521 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.846749 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.846837 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrkv\" (UniqueName: \"kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.846887 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.948201 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.948253 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-tzrkv\" (UniqueName: \"kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.948273 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.948865 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.948938 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:29 crc kubenswrapper[4814]: I0216 09:58:29.974193 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrkv\" (UniqueName: \"kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv\") pod \"redhat-operators-55cfw\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.077318 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.396560 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.976257 4814 generic.go:334] "Generic (PLEG): container finished" podID="7e703789-c69e-4376-a513-cd7b042c66b4" containerID="b06457b797a1c901c68493e0d90ca677e18ed052bb6dbb69b1647952b2ad43f6" exitCode=0 Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.976290 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" event={"ID":"7e703789-c69e-4376-a513-cd7b042c66b4","Type":"ContainerDied","Data":"b06457b797a1c901c68493e0d90ca677e18ed052bb6dbb69b1647952b2ad43f6"} Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.978860 4814 generic.go:334] "Generic (PLEG): container finished" podID="14d96a0b-6376-4cfe-82cb-065db585c253" containerID="6f7670f07aa03107c78bd10b92498e34212af5ce633f58826d8a577472e64e6f" exitCode=0 Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.978916 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerDied","Data":"6f7670f07aa03107c78bd10b92498e34212af5ce633f58826d8a577472e64e6f"} Feb 16 09:58:30 crc kubenswrapper[4814]: I0216 09:58:30.978954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerStarted","Data":"6b5c6dcb08422157bbe933713d8cd8c76cc74cffb190ada43de513fcac6525be"} Feb 16 09:58:31 crc kubenswrapper[4814]: I0216 09:58:31.987466 4814 generic.go:334] "Generic (PLEG): container finished" podID="7e703789-c69e-4376-a513-cd7b042c66b4" 
containerID="12848b56d9372751a5fa6ae269495c2b68c23a3b08745932e37005b35498c09a" exitCode=0 Feb 16 09:58:31 crc kubenswrapper[4814]: I0216 09:58:31.987598 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" event={"ID":"7e703789-c69e-4376-a513-cd7b042c66b4","Type":"ContainerDied","Data":"12848b56d9372751a5fa6ae269495c2b68c23a3b08745932e37005b35498c09a"} Feb 16 09:58:31 crc kubenswrapper[4814]: I0216 09:58:31.990119 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerStarted","Data":"61d49ddd62de24877bcd01294ecedc801e41fa591400bac0ad660d9db806288e"} Feb 16 09:58:32 crc kubenswrapper[4814]: I0216 09:58:32.319503 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nj8mm" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.322150 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.413568 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fz96\" (UniqueName: \"kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96\") pod \"7e703789-c69e-4376-a513-cd7b042c66b4\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.413890 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle\") pod \"7e703789-c69e-4376-a513-cd7b042c66b4\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.414009 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util\") pod \"7e703789-c69e-4376-a513-cd7b042c66b4\" (UID: \"7e703789-c69e-4376-a513-cd7b042c66b4\") " Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.416511 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle" (OuterVolumeSpecName: "bundle") pod "7e703789-c69e-4376-a513-cd7b042c66b4" (UID: "7e703789-c69e-4376-a513-cd7b042c66b4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.420679 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96" (OuterVolumeSpecName: "kube-api-access-5fz96") pod "7e703789-c69e-4376-a513-cd7b042c66b4" (UID: "7e703789-c69e-4376-a513-cd7b042c66b4"). InnerVolumeSpecName "kube-api-access-5fz96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.435519 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util" (OuterVolumeSpecName: "util") pod "7e703789-c69e-4376-a513-cd7b042c66b4" (UID: "7e703789-c69e-4376-a513-cd7b042c66b4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.514971 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fz96\" (UniqueName: \"kubernetes.io/projected/7e703789-c69e-4376-a513-cd7b042c66b4-kube-api-access-5fz96\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.515029 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:33 crc kubenswrapper[4814]: I0216 09:58:33.515042 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e703789-c69e-4376-a513-cd7b042c66b4-util\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:34 crc kubenswrapper[4814]: I0216 09:58:34.008418 4814 generic.go:334] "Generic (PLEG): container finished" podID="14d96a0b-6376-4cfe-82cb-065db585c253" containerID="61d49ddd62de24877bcd01294ecedc801e41fa591400bac0ad660d9db806288e" exitCode=0 Feb 16 09:58:34 crc kubenswrapper[4814]: I0216 09:58:34.008548 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerDied","Data":"61d49ddd62de24877bcd01294ecedc801e41fa591400bac0ad660d9db806288e"} Feb 16 09:58:34 crc kubenswrapper[4814]: I0216 09:58:34.013218 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" event={"ID":"7e703789-c69e-4376-a513-cd7b042c66b4","Type":"ContainerDied","Data":"282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489"} Feb 16 09:58:34 crc kubenswrapper[4814]: I0216 09:58:34.014675 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282f5beefcb132d05e084286287bf2280c24a63af79cddc99a634896708c8489" Feb 16 09:58:34 crc kubenswrapper[4814]: I0216 09:58:34.014780 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x" Feb 16 09:58:35 crc kubenswrapper[4814]: I0216 09:58:35.023273 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerStarted","Data":"d617a9d4e7f8db255884722610f82692b62531a074b3b1f2f31e4f7fee162c41"} Feb 16 09:58:35 crc kubenswrapper[4814]: I0216 09:58:35.056978 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55cfw" podStartSLOduration=2.6021091370000002 podStartE2EDuration="6.056952121s" podCreationTimestamp="2026-02-16 09:58:29 +0000 UTC" firstStartedPulling="2026-02-16 09:58:30.980315174 +0000 UTC m=+768.673471354" lastFinishedPulling="2026-02-16 09:58:34.435158158 +0000 UTC m=+772.128314338" observedRunningTime="2026-02-16 09:58:35.051209062 +0000 UTC m=+772.744365282" watchObservedRunningTime="2026-02-16 09:58:35.056952121 +0000 UTC m=+772.750108341" Feb 16 09:58:37 crc kubenswrapper[4814]: I0216 09:58:37.960248 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 16 09:58:37 crc kubenswrapper[4814]: I0216 09:58:37.960801 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.126467 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:58:38 crc kubenswrapper[4814]: E0216 09:58:38.126775 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="extract" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.126795 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="extract" Feb 16 09:58:38 crc kubenswrapper[4814]: E0216 09:58:38.126811 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="util" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.126820 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="util" Feb 16 09:58:38 crc kubenswrapper[4814]: E0216 09:58:38.126839 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="pull" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.126846 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="pull" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.126975 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e703789-c69e-4376-a513-cd7b042c66b4" containerName="extract" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.127930 4814 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.140645 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.186064 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.186164 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.186423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8qmb\" (UniqueName: \"kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.288179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.288270 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k8qmb\" (UniqueName: \"kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.288305 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.288931 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.289096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.317370 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8qmb\" (UniqueName: \"kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb\") pod \"certified-operators-cx84x\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.453760 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:38 crc kubenswrapper[4814]: I0216 09:58:38.760474 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:58:39 crc kubenswrapper[4814]: I0216 09:58:39.069009 4814 generic.go:334] "Generic (PLEG): container finished" podID="8a24f28a-0527-41b0-9671-05c696826dc2" containerID="3e93f6b418eca0cd3420f9ce1b30508a60727b5299e53638d97631ae8c613dc6" exitCode=0 Feb 16 09:58:39 crc kubenswrapper[4814]: I0216 09:58:39.069068 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerDied","Data":"3e93f6b418eca0cd3420f9ce1b30508a60727b5299e53638d97631ae8c613dc6"} Feb 16 09:58:39 crc kubenswrapper[4814]: I0216 09:58:39.069100 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerStarted","Data":"dafcdea1379ff704eb5bae23c9edcf3534fcfd2767eed21d3074ec52e7894d80"} Feb 16 09:58:40 crc kubenswrapper[4814]: I0216 09:58:40.077500 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:40 crc kubenswrapper[4814]: I0216 09:58:40.077884 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:41 crc kubenswrapper[4814]: I0216 09:58:41.083022 4814 generic.go:334] "Generic (PLEG): container finished" podID="8a24f28a-0527-41b0-9671-05c696826dc2" containerID="253ea39f6f7daea4cf0b4ae6feaf473b9e4c03fef84eb3ea383916096029bdac" exitCode=0 Feb 16 09:58:41 crc kubenswrapper[4814]: I0216 09:58:41.084445 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" 
event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerDied","Data":"253ea39f6f7daea4cf0b4ae6feaf473b9e4c03fef84eb3ea383916096029bdac"} Feb 16 09:58:41 crc kubenswrapper[4814]: I0216 09:58:41.164695 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55cfw" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="registry-server" probeResult="failure" output=< Feb 16 09:58:41 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 09:58:41 crc kubenswrapper[4814]: > Feb 16 09:58:42 crc kubenswrapper[4814]: I0216 09:58:42.091060 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerStarted","Data":"0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1"} Feb 16 09:58:42 crc kubenswrapper[4814]: I0216 09:58:42.121379 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cx84x" podStartSLOduration=1.661646165 podStartE2EDuration="4.121356098s" podCreationTimestamp="2026-02-16 09:58:38 +0000 UTC" firstStartedPulling="2026-02-16 09:58:39.075779462 +0000 UTC m=+776.768935632" lastFinishedPulling="2026-02-16 09:58:41.535489385 +0000 UTC m=+779.228645565" observedRunningTime="2026-02-16 09:58:42.116526677 +0000 UTC m=+779.809682857" watchObservedRunningTime="2026-02-16 09:58:42.121356098 +0000 UTC m=+779.814512268" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.362105 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.363062 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.369148 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-8d6j6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.369324 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.370859 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.377663 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.415330 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cm9\" (UniqueName: \"kubernetes.io/projected/7b1e81f6-bcc5-439b-845d-d7f11f18a3ca-kube-api-access-r7cm9\") pod \"obo-prometheus-operator-68bc856cb9-85xhn\" (UID: \"7b1e81f6-bcc5-439b-845d-d7f11f18a3ca\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.488412 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.489341 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.491031 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-b6lsh" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.500164 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.517512 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cm9\" (UniqueName: \"kubernetes.io/projected/7b1e81f6-bcc5-439b-845d-d7f11f18a3ca-kube-api-access-r7cm9\") pod \"obo-prometheus-operator-68bc856cb9-85xhn\" (UID: \"7b1e81f6-bcc5-439b-845d-d7f11f18a3ca\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.517658 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.517758 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.518916 4814 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.524499 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.525531 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.560424 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.584323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cm9\" (UniqueName: \"kubernetes.io/projected/7b1e81f6-bcc5-439b-845d-d7f11f18a3ca-kube-api-access-r7cm9\") pod \"obo-prometheus-operator-68bc856cb9-85xhn\" (UID: \"7b1e81f6-bcc5-439b-845d-d7f11f18a3ca\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.620314 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.620412 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: 
\"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.620453 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: \"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.620497 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.624515 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.625779 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44a93ef4-16c4-482f-a103-bfed7099ab40-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8\" (UID: \"44a93ef4-16c4-482f-a103-bfed7099ab40\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 
09:58:45.726219 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ww9s6"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.731335 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: \"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.731413 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: \"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.740343 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: \"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.740426 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.747693 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.749582 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1674b66d-5eb2-4f20-853b-d7321fe6194c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5fff667df6-zrc42\" (UID: \"1674b66d-5eb2-4f20-853b-d7321fe6194c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.754621 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zk8k7" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.763942 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.764732 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ww9s6"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.809964 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.833195 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb6kk\" (UniqueName: \"kubernetes.io/projected/633edb4f-6c36-408b-bd22-3930c2112c90-kube-api-access-fb6kk\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.833275 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/633edb4f-6c36-408b-bd22-3930c2112c90-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.852288 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.935011 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/633edb4f-6c36-408b-bd22-3930c2112c90-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.935732 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb6kk\" (UniqueName: \"kubernetes.io/projected/633edb4f-6c36-408b-bd22-3930c2112c90-kube-api-access-fb6kk\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.946910 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/633edb4f-6c36-408b-bd22-3930c2112c90-observability-operator-tls\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.967128 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7cc86"] Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.971180 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.972379 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb6kk\" (UniqueName: \"kubernetes.io/projected/633edb4f-6c36-408b-bd22-3930c2112c90-kube-api-access-fb6kk\") pod \"observability-operator-59bdc8b94-ww9s6\" (UID: \"633edb4f-6c36-408b-bd22-3930c2112c90\") " pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.974332 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nwldm" Feb 16 09:58:45 crc kubenswrapper[4814]: I0216 09:58:45.986267 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7cc86"] Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.036521 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5998ae63-01b5-4762-9606-6b5a3f091b5c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.036650 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4975m\" (UniqueName: \"kubernetes.io/projected/5998ae63-01b5-4762-9606-6b5a3f091b5c-kube-api-access-4975m\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.121094 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.138514 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4975m\" (UniqueName: \"kubernetes.io/projected/5998ae63-01b5-4762-9606-6b5a3f091b5c-kube-api-access-4975m\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.139112 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5998ae63-01b5-4762-9606-6b5a3f091b5c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.143372 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5998ae63-01b5-4762-9606-6b5a3f091b5c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.165283 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4975m\" (UniqueName: \"kubernetes.io/projected/5998ae63-01b5-4762-9606-6b5a3f091b5c-kube-api-access-4975m\") pod \"perses-operator-5bf474d74f-7cc86\" (UID: \"5998ae63-01b5-4762-9606-6b5a3f091b5c\") " pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.176423 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn"] Feb 16 09:58:46 crc kubenswrapper[4814]: W0216 
09:58:46.199337 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b1e81f6_bcc5_439b_845d_d7f11f18a3ca.slice/crio-b845e14e17328fe51ba75c8483771e7da3e5c4bbf62ef8bade4735a9860d42d4 WatchSource:0}: Error finding container b845e14e17328fe51ba75c8483771e7da3e5c4bbf62ef8bade4735a9860d42d4: Status 404 returned error can't find the container with id b845e14e17328fe51ba75c8483771e7da3e5c4bbf62ef8bade4735a9860d42d4 Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.313393 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.493121 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-ww9s6"] Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.496937 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8"] Feb 16 09:58:46 crc kubenswrapper[4814]: W0216 09:58:46.517717 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44a93ef4_16c4_482f_a103_bfed7099ab40.slice/crio-c50d1e86aef0bebffde3f2ab9a64e05f05d09b9538f0af4496acf3fa309ff495 WatchSource:0}: Error finding container c50d1e86aef0bebffde3f2ab9a64e05f05d09b9538f0af4496acf3fa309ff495: Status 404 returned error can't find the container with id c50d1e86aef0bebffde3f2ab9a64e05f05d09b9538f0af4496acf3fa309ff495 Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.589458 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42"] Feb 16 09:58:46 crc kubenswrapper[4814]: I0216 09:58:46.790166 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7cc86"] 
Feb 16 09:58:46 crc kubenswrapper[4814]: W0216 09:58:46.799395 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5998ae63_01b5_4762_9606_6b5a3f091b5c.slice/crio-d19822e5592b212f245e054dd47613268034ddec709c09971154cf6f8608e997 WatchSource:0}: Error finding container d19822e5592b212f245e054dd47613268034ddec709c09971154cf6f8608e997: Status 404 returned error can't find the container with id d19822e5592b212f245e054dd47613268034ddec709c09971154cf6f8608e997 Feb 16 09:58:47 crc kubenswrapper[4814]: I0216 09:58:47.125391 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" event={"ID":"44a93ef4-16c4-482f-a103-bfed7099ab40","Type":"ContainerStarted","Data":"c50d1e86aef0bebffde3f2ab9a64e05f05d09b9538f0af4496acf3fa309ff495"} Feb 16 09:58:47 crc kubenswrapper[4814]: I0216 09:58:47.126787 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" event={"ID":"633edb4f-6c36-408b-bd22-3930c2112c90","Type":"ContainerStarted","Data":"b487f3cebb7adac3e9dac331e1d4683fb925d9349f9c3a83fe6065395df4dd53"} Feb 16 09:58:47 crc kubenswrapper[4814]: I0216 09:58:47.128902 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" event={"ID":"7b1e81f6-bcc5-439b-845d-d7f11f18a3ca","Type":"ContainerStarted","Data":"b845e14e17328fe51ba75c8483771e7da3e5c4bbf62ef8bade4735a9860d42d4"} Feb 16 09:58:47 crc kubenswrapper[4814]: I0216 09:58:47.130410 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" event={"ID":"5998ae63-01b5-4762-9606-6b5a3f091b5c","Type":"ContainerStarted","Data":"d19822e5592b212f245e054dd47613268034ddec709c09971154cf6f8608e997"} Feb 16 09:58:47 crc kubenswrapper[4814]: I0216 09:58:47.131975 4814 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" event={"ID":"1674b66d-5eb2-4f20-853b-d7321fe6194c","Type":"ContainerStarted","Data":"f8a3c73b8df5b3d385944e78c713cfaa28d8cf1f6bef1f7d82a53522e4e7cdd2"} Feb 16 09:58:48 crc kubenswrapper[4814]: I0216 09:58:48.455082 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:48 crc kubenswrapper[4814]: I0216 09:58:48.455132 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:48 crc kubenswrapper[4814]: I0216 09:58:48.647042 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:49 crc kubenswrapper[4814]: I0216 09:58:49.277505 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:49 crc kubenswrapper[4814]: I0216 09:58:49.348523 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:58:50 crc kubenswrapper[4814]: I0216 09:58:50.184437 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:50 crc kubenswrapper[4814]: I0216 09:58:50.273148 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:51 crc kubenswrapper[4814]: I0216 09:58:51.186908 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cx84x" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="registry-server" containerID="cri-o://0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" gracePeriod=2 Feb 16 09:58:51 crc kubenswrapper[4814]: I0216 09:58:51.518953 
4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:52 crc kubenswrapper[4814]: I0216 09:58:52.200874 4814 generic.go:334] "Generic (PLEG): container finished" podID="8a24f28a-0527-41b0-9671-05c696826dc2" containerID="0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" exitCode=0 Feb 16 09:58:52 crc kubenswrapper[4814]: I0216 09:58:52.201023 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerDied","Data":"0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1"} Feb 16 09:58:52 crc kubenswrapper[4814]: I0216 09:58:52.201255 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55cfw" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="registry-server" containerID="cri-o://d617a9d4e7f8db255884722610f82692b62531a074b3b1f2f31e4f7fee162c41" gracePeriod=2 Feb 16 09:58:53 crc kubenswrapper[4814]: I0216 09:58:53.214358 4814 generic.go:334] "Generic (PLEG): container finished" podID="14d96a0b-6376-4cfe-82cb-065db585c253" containerID="d617a9d4e7f8db255884722610f82692b62531a074b3b1f2f31e4f7fee162c41" exitCode=0 Feb 16 09:58:53 crc kubenswrapper[4814]: I0216 09:58:53.214424 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerDied","Data":"d617a9d4e7f8db255884722610f82692b62531a074b3b1f2f31e4f7fee162c41"} Feb 16 09:58:58 crc kubenswrapper[4814]: E0216 09:58:58.456633 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1 is running failed: container process not found" 
containerID="0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 09:58:58 crc kubenswrapper[4814]: E0216 09:58:58.459574 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1 is running failed: container process not found" containerID="0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 09:58:58 crc kubenswrapper[4814]: E0216 09:58:58.460727 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1 is running failed: container process not found" containerID="0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 09:58:58 crc kubenswrapper[4814]: E0216 09:58:58.460776 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-cx84x" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="registry-server" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.189383 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.202300 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.267257 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55cfw" event={"ID":"14d96a0b-6376-4cfe-82cb-065db585c253","Type":"ContainerDied","Data":"6b5c6dcb08422157bbe933713d8cd8c76cc74cffb190ada43de513fcac6525be"} Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.267316 4814 scope.go:117] "RemoveContainer" containerID="d617a9d4e7f8db255884722610f82692b62531a074b3b1f2f31e4f7fee162c41" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.267452 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55cfw" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.274788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cx84x" event={"ID":"8a24f28a-0527-41b0-9671-05c696826dc2","Type":"ContainerDied","Data":"dafcdea1379ff704eb5bae23c9edcf3534fcfd2767eed21d3074ec52e7894d80"} Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.274863 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cx84x" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279549 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8qmb\" (UniqueName: \"kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb\") pod \"8a24f28a-0527-41b0-9671-05c696826dc2\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279619 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities\") pod \"14d96a0b-6376-4cfe-82cb-065db585c253\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content\") pod \"14d96a0b-6376-4cfe-82cb-065db585c253\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279747 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content\") pod \"8a24f28a-0527-41b0-9671-05c696826dc2\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279783 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities\") pod \"8a24f28a-0527-41b0-9671-05c696826dc2\" (UID: \"8a24f28a-0527-41b0-9671-05c696826dc2\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.279819 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tzrkv\" (UniqueName: \"kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv\") pod \"14d96a0b-6376-4cfe-82cb-065db585c253\" (UID: \"14d96a0b-6376-4cfe-82cb-065db585c253\") " Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.281940 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities" (OuterVolumeSpecName: "utilities") pod "14d96a0b-6376-4cfe-82cb-065db585c253" (UID: "14d96a0b-6376-4cfe-82cb-065db585c253"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.281232 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities" (OuterVolumeSpecName: "utilities") pod "8a24f28a-0527-41b0-9671-05c696826dc2" (UID: "8a24f28a-0527-41b0-9671-05c696826dc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.297455 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb" (OuterVolumeSpecName: "kube-api-access-k8qmb") pod "8a24f28a-0527-41b0-9671-05c696826dc2" (UID: "8a24f28a-0527-41b0-9671-05c696826dc2"). InnerVolumeSpecName "kube-api-access-k8qmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.311714 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv" (OuterVolumeSpecName: "kube-api-access-tzrkv") pod "14d96a0b-6376-4cfe-82cb-065db585c253" (UID: "14d96a0b-6376-4cfe-82cb-065db585c253"). InnerVolumeSpecName "kube-api-access-tzrkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.339723 4814 scope.go:117] "RemoveContainer" containerID="61d49ddd62de24877bcd01294ecedc801e41fa591400bac0ad660d9db806288e" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.354073 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a24f28a-0527-41b0-9671-05c696826dc2" (UID: "8a24f28a-0527-41b0-9671-05c696826dc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.384717 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.384949 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.385008 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a24f28a-0527-41b0-9671-05c696826dc2-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.385087 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzrkv\" (UniqueName: \"kubernetes.io/projected/14d96a0b-6376-4cfe-82cb-065db585c253-kube-api-access-tzrkv\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.385143 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8qmb\" (UniqueName: \"kubernetes.io/projected/8a24f28a-0527-41b0-9671-05c696826dc2-kube-api-access-k8qmb\") on 
node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.390497 4814 scope.go:117] "RemoveContainer" containerID="6f7670f07aa03107c78bd10b92498e34212af5ce633f58826d8a577472e64e6f" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.412843 4814 scope.go:117] "RemoveContainer" containerID="0dbc4b6df8541d186b75bd53d262fa00080deef84f857d246493b51269b67ff1" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.466322 4814 scope.go:117] "RemoveContainer" containerID="253ea39f6f7daea4cf0b4ae6feaf473b9e4c03fef84eb3ea383916096029bdac" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.478192 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14d96a0b-6376-4cfe-82cb-065db585c253" (UID: "14d96a0b-6376-4cfe-82cb-065db585c253"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.487933 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14d96a0b-6376-4cfe-82cb-065db585c253-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.499463 4814 scope.go:117] "RemoveContainer" containerID="3e93f6b418eca0cd3420f9ce1b30508a60727b5299e53638d97631ae8c613dc6" Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.617086 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.619251 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55cfw"] Feb 16 09:58:59 crc kubenswrapper[4814]: I0216 09:58:59.631958 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:58:59 crc 
kubenswrapper[4814]: I0216 09:58:59.637090 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cx84x"] Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.283454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" event={"ID":"44a93ef4-16c4-482f-a103-bfed7099ab40","Type":"ContainerStarted","Data":"bb9ae4adce6aaaa54a644b8fa1b1038a3ec92e78606e2a47842165ae3940eb78"} Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.285315 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" event={"ID":"633edb4f-6c36-408b-bd22-3930c2112c90","Type":"ContainerStarted","Data":"5baf742578003b0166dbcf974d496f1799a65d0697102dc991cbd98a94d003f6"} Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.285924 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.287126 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" event={"ID":"7b1e81f6-bcc5-439b-845d-d7f11f18a3ca","Type":"ContainerStarted","Data":"5bfbeb0dc9d181403a72ebaaf07c76138be26d5a481a97ca7f8755432f0e328c"} Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.288334 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.291904 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" event={"ID":"5998ae63-01b5-4762-9606-6b5a3f091b5c","Type":"ContainerStarted","Data":"f304849132fc3551cc489be80a8a2afa654be568389b27f4617142a20bf764a2"} Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.292003 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.294570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" event={"ID":"1674b66d-5eb2-4f20-853b-d7321fe6194c","Type":"ContainerStarted","Data":"c802d82eae7cbd95079752145a02355726070b232c7e28951ee656120273abed"} Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.314617 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8" podStartSLOduration=2.654813408 podStartE2EDuration="15.314594799s" podCreationTimestamp="2026-02-16 09:58:45 +0000 UTC" firstStartedPulling="2026-02-16 09:58:46.529892135 +0000 UTC m=+784.223048315" lastFinishedPulling="2026-02-16 09:58:59.189673526 +0000 UTC m=+796.882829706" observedRunningTime="2026-02-16 09:59:00.310132874 +0000 UTC m=+798.003289054" watchObservedRunningTime="2026-02-16 09:59:00.314594799 +0000 UTC m=+798.007750979" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.348354 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-85xhn" podStartSLOduration=2.4000037499999998 podStartE2EDuration="15.348324971s" podCreationTimestamp="2026-02-16 09:58:45 +0000 UTC" firstStartedPulling="2026-02-16 09:58:46.216015777 +0000 UTC m=+783.909171957" lastFinishedPulling="2026-02-16 09:58:59.164336958 +0000 UTC m=+796.857493178" observedRunningTime="2026-02-16 09:59:00.344406701 +0000 UTC m=+798.037562881" watchObservedRunningTime="2026-02-16 09:59:00.348324971 +0000 UTC m=+798.041481141" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.451999 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-ww9s6" 
podStartSLOduration=2.780974433 podStartE2EDuration="15.451974367s" podCreationTimestamp="2026-02-16 09:58:45 +0000 UTC" firstStartedPulling="2026-02-16 09:58:46.529135614 +0000 UTC m=+784.222291794" lastFinishedPulling="2026-02-16 09:58:59.200135548 +0000 UTC m=+796.893291728" observedRunningTime="2026-02-16 09:59:00.410418956 +0000 UTC m=+798.103575136" watchObservedRunningTime="2026-02-16 09:59:00.451974367 +0000 UTC m=+798.145130537" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.454625 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" podStartSLOduration=3.067970828 podStartE2EDuration="15.45461648s" podCreationTimestamp="2026-02-16 09:58:45 +0000 UTC" firstStartedPulling="2026-02-16 09:58:46.801595614 +0000 UTC m=+784.494751794" lastFinishedPulling="2026-02-16 09:58:59.188241266 +0000 UTC m=+796.881397446" observedRunningTime="2026-02-16 09:59:00.449092596 +0000 UTC m=+798.142248776" watchObservedRunningTime="2026-02-16 09:59:00.45461648 +0000 UTC m=+798.147772660" Feb 16 09:59:00 crc kubenswrapper[4814]: I0216 09:59:00.499802 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5fff667df6-zrc42" podStartSLOduration=2.9233849 podStartE2EDuration="15.499765641s" podCreationTimestamp="2026-02-16 09:58:45 +0000 UTC" firstStartedPulling="2026-02-16 09:58:46.594363966 +0000 UTC m=+784.287520146" lastFinishedPulling="2026-02-16 09:58:59.170744697 +0000 UTC m=+796.863900887" observedRunningTime="2026-02-16 09:59:00.494124234 +0000 UTC m=+798.187280414" watchObservedRunningTime="2026-02-16 09:59:00.499765641 +0000 UTC m=+798.192921821" Feb 16 09:59:01 crc kubenswrapper[4814]: I0216 09:59:01.003148 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" path="/var/lib/kubelet/pods/14d96a0b-6376-4cfe-82cb-065db585c253/volumes" Feb 16 
09:59:01 crc kubenswrapper[4814]: I0216 09:59:01.004664 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" path="/var/lib/kubelet/pods/8a24f28a-0527-41b0-9671-05c696826dc2/volumes" Feb 16 09:59:06 crc kubenswrapper[4814]: I0216 09:59:06.318276 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7cc86" Feb 16 09:59:07 crc kubenswrapper[4814]: I0216 09:59:07.960168 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:59:07 crc kubenswrapper[4814]: I0216 09:59:07.960270 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.784771 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576"] Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786067 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="registry-server" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786084 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="registry-server" Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786107 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="extract-utilities" 
Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786114 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="extract-utilities" Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786124 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="extract-utilities" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786131 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="extract-utilities" Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786143 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="extract-content" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786167 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="extract-content" Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786177 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="extract-content" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786183 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="extract-content" Feb 16 09:59:22 crc kubenswrapper[4814]: E0216 09:59:22.786193 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="registry-server" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786199 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="registry-server" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786322 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d96a0b-6376-4cfe-82cb-065db585c253" containerName="registry-server" Feb 16 
09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.786342 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a24f28a-0527-41b0-9671-05c696826dc2" containerName="registry-server" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.787299 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.789929 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.798415 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576"] Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.848908 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnprh\" (UniqueName: \"kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.849032 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.849057 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.950982 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.951042 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.951118 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnprh\" (UniqueName: \"kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.951683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: 
\"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.952096 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:22 crc kubenswrapper[4814]: I0216 09:59:22.973929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnprh\" (UniqueName: \"kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:23 crc kubenswrapper[4814]: I0216 09:59:23.109165 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:23 crc kubenswrapper[4814]: I0216 09:59:23.360981 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576"] Feb 16 09:59:23 crc kubenswrapper[4814]: I0216 09:59:23.449687 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" event={"ID":"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5","Type":"ContainerStarted","Data":"eaab0208155489ea0fd7a431b5658b1127a5cb9c9a8cec8e3c93d8a26562509e"} Feb 16 09:59:24 crc kubenswrapper[4814]: I0216 09:59:24.459111 4814 generic.go:334] "Generic (PLEG): container finished" podID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerID="8eec440955f6e1c524b6ad13310d83014caa7ed454ccc6777be222156c98163f" exitCode=0 Feb 16 09:59:24 crc kubenswrapper[4814]: I0216 09:59:24.459180 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" event={"ID":"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5","Type":"ContainerDied","Data":"8eec440955f6e1c524b6ad13310d83014caa7ed454ccc6777be222156c98163f"} Feb 16 09:59:27 crc kubenswrapper[4814]: I0216 09:59:27.489794 4814 generic.go:334] "Generic (PLEG): container finished" podID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerID="59dc616c63c40f9249551361f80bc6afd86a61308cddfa1685be54d708b4b52a" exitCode=0 Feb 16 09:59:27 crc kubenswrapper[4814]: I0216 09:59:27.489868 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" event={"ID":"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5","Type":"ContainerDied","Data":"59dc616c63c40f9249551361f80bc6afd86a61308cddfa1685be54d708b4b52a"} Feb 16 09:59:28 crc kubenswrapper[4814]: I0216 09:59:28.498624 4814 
generic.go:334] "Generic (PLEG): container finished" podID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerID="c3da46b978dd590ca222bc8fa92f50f3dd92bf758ac8be1fd27727e9d09d8763" exitCode=0 Feb 16 09:59:28 crc kubenswrapper[4814]: I0216 09:59:28.498695 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" event={"ID":"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5","Type":"ContainerDied","Data":"c3da46b978dd590ca222bc8fa92f50f3dd92bf758ac8be1fd27727e9d09d8763"} Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.772088 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.854093 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util\") pod \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.854159 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle\") pod \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.854266 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnprh\" (UniqueName: \"kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh\") pod \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\" (UID: \"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5\") " Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.855042 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle" (OuterVolumeSpecName: "bundle") pod "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" (UID: "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.862556 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh" (OuterVolumeSpecName: "kube-api-access-pnprh") pod "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" (UID: "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5"). InnerVolumeSpecName "kube-api-access-pnprh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.866025 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util" (OuterVolumeSpecName: "util") pod "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" (UID: "cf16c7fb-2e89-4cc7-b19f-6ac91d078db5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.956132 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-util\") on node \"crc\" DevicePath \"\"" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.956174 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 09:59:29 crc kubenswrapper[4814]: I0216 09:59:29.956184 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnprh\" (UniqueName: \"kubernetes.io/projected/cf16c7fb-2e89-4cc7-b19f-6ac91d078db5-kube-api-access-pnprh\") on node \"crc\" DevicePath \"\"" Feb 16 09:59:30 crc kubenswrapper[4814]: I0216 09:59:30.518967 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" event={"ID":"cf16c7fb-2e89-4cc7-b19f-6ac91d078db5","Type":"ContainerDied","Data":"eaab0208155489ea0fd7a431b5658b1127a5cb9c9a8cec8e3c93d8a26562509e"} Feb 16 09:59:30 crc kubenswrapper[4814]: I0216 09:59:30.519022 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaab0208155489ea0fd7a431b5658b1127a5cb9c9a8cec8e3c93d8a26562509e" Feb 16 09:59:30 crc kubenswrapper[4814]: I0216 09:59:30.519513 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.604846 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nccwr"] Feb 16 09:59:31 crc kubenswrapper[4814]: E0216 09:59:31.605798 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="util" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.605818 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="util" Feb 16 09:59:31 crc kubenswrapper[4814]: E0216 09:59:31.605835 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="pull" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.605842 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="pull" Feb 16 09:59:31 crc kubenswrapper[4814]: E0216 09:59:31.605861 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="extract" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.605871 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="extract" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.606020 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf16c7fb-2e89-4cc7-b19f-6ac91d078db5" containerName="extract" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.606737 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.608855 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.609111 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7rcxd" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.609273 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.616654 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nccwr"] Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.689133 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kr62\" (UniqueName: \"kubernetes.io/projected/6dac0f48-5703-4178-b06c-51edae8f0735-kube-api-access-2kr62\") pod \"nmstate-operator-694c9596b7-nccwr\" (UID: \"6dac0f48-5703-4178-b06c-51edae8f0735\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.790906 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kr62\" (UniqueName: \"kubernetes.io/projected/6dac0f48-5703-4178-b06c-51edae8f0735-kube-api-access-2kr62\") pod \"nmstate-operator-694c9596b7-nccwr\" (UID: \"6dac0f48-5703-4178-b06c-51edae8f0735\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.813620 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kr62\" (UniqueName: \"kubernetes.io/projected/6dac0f48-5703-4178-b06c-51edae8f0735-kube-api-access-2kr62\") pod \"nmstate-operator-694c9596b7-nccwr\" (UID: 
\"6dac0f48-5703-4178-b06c-51edae8f0735\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" Feb 16 09:59:31 crc kubenswrapper[4814]: I0216 09:59:31.925101 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" Feb 16 09:59:32 crc kubenswrapper[4814]: I0216 09:59:32.359387 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-nccwr"] Feb 16 09:59:32 crc kubenswrapper[4814]: W0216 09:59:32.365743 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dac0f48_5703_4178_b06c_51edae8f0735.slice/crio-812e662294ba4829a84d4c6da2da05bbdb3c34ae3a3e4fb4c035f0659f40efe9 WatchSource:0}: Error finding container 812e662294ba4829a84d4c6da2da05bbdb3c34ae3a3e4fb4c035f0659f40efe9: Status 404 returned error can't find the container with id 812e662294ba4829a84d4c6da2da05bbdb3c34ae3a3e4fb4c035f0659f40efe9 Feb 16 09:59:32 crc kubenswrapper[4814]: I0216 09:59:32.534855 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" event={"ID":"6dac0f48-5703-4178-b06c-51edae8f0735","Type":"ContainerStarted","Data":"812e662294ba4829a84d4c6da2da05bbdb3c34ae3a3e4fb4c035f0659f40efe9"} Feb 16 09:59:35 crc kubenswrapper[4814]: I0216 09:59:35.560300 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" event={"ID":"6dac0f48-5703-4178-b06c-51edae8f0735","Type":"ContainerStarted","Data":"55044747dade688b0a107b29be28f0628207799ba9b84fac1546804d1882fee8"} Feb 16 09:59:35 crc kubenswrapper[4814]: I0216 09:59:35.582436 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-nccwr" podStartSLOduration=1.953315788 podStartE2EDuration="4.582409919s" podCreationTimestamp="2026-02-16 09:59:31 +0000 UTC" 
firstStartedPulling="2026-02-16 09:59:32.369974023 +0000 UTC m=+830.063130203" lastFinishedPulling="2026-02-16 09:59:34.999068154 +0000 UTC m=+832.692224334" observedRunningTime="2026-02-16 09:59:35.580659851 +0000 UTC m=+833.273816031" watchObservedRunningTime="2026-02-16 09:59:35.582409919 +0000 UTC m=+833.275566109" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.516181 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.517592 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.522094 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-b2krx" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.545234 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.550825 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.551889 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.557331 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.579450 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4zq\" (UniqueName: \"kubernetes.io/projected/6959ecae-2538-428c-956d-edf875e58947-kube-api-access-4v4zq\") pod \"nmstate-metrics-58c85c668d-c4tzr\" (UID: \"6959ecae-2538-428c-956d-edf875e58947\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.591324 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lvh27"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.592450 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.602182 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-nmstate-lock\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681409 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4zq\" (UniqueName: \"kubernetes.io/projected/6959ecae-2538-428c-956d-edf875e58947-kube-api-access-4v4zq\") pod \"nmstate-metrics-58c85c668d-c4tzr\" (UID: \"6959ecae-2538-428c-956d-edf875e58947\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681473 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8b9f\" (UniqueName: \"kubernetes.io/projected/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-kube-api-access-x8b9f\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681512 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681575 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxfcw\" (UniqueName: \"kubernetes.io/projected/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-kube-api-access-wxfcw\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681600 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-dbus-socket\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.681631 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-ovs-socket\") pod 
\"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.712745 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4zq\" (UniqueName: \"kubernetes.io/projected/6959ecae-2538-428c-956d-edf875e58947-kube-api-access-4v4zq\") pod \"nmstate-metrics-58c85c668d-c4tzr\" (UID: \"6959ecae-2538-428c-956d-edf875e58947\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.720398 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.724838 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.727395 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.728024 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2wp4t" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.736875 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582"] Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.737212 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.782595 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxfcw\" (UniqueName: \"kubernetes.io/projected/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-kube-api-access-wxfcw\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-dbus-socket\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783179 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-ovs-socket\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783266 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-nmstate-lock\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783356 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l52f\" (UniqueName: \"kubernetes.io/projected/4b6baf37-55ba-48ef-bae6-c74b2f647956-kube-api-access-8l52f\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783399 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-ovs-socket\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " 
pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783568 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4b6baf37-55ba-48ef-bae6-c74b2f647956-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783673 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8b9f\" (UniqueName: \"kubernetes.io/projected/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-kube-api-access-x8b9f\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783757 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-nmstate-lock\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783796 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b6baf37-55ba-48ef-bae6-c74b2f647956-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783934 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: 
\"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.783942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-dbus-socket\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: E0216 09:59:36.784032 4814 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 16 09:59:36 crc kubenswrapper[4814]: E0216 09:59:36.784211 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair podName:4dc40630-922d-4c2a-b61b-2dc11a8aa9fd nodeName:}" failed. No retries permitted until 2026-02-16 09:59:37.284179348 +0000 UTC m=+834.977335528 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair") pod "nmstate-webhook-866bcb46dc-fbxdv" (UID: "4dc40630-922d-4c2a-b61b-2dc11a8aa9fd") : secret "openshift-nmstate-webhook" not found Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.803411 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8b9f\" (UniqueName: \"kubernetes.io/projected/4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1-kube-api-access-x8b9f\") pod \"nmstate-handler-lvh27\" (UID: \"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1\") " pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.807059 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxfcw\" (UniqueName: \"kubernetes.io/projected/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-kube-api-access-wxfcw\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.840373 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.889055 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l52f\" (UniqueName: \"kubernetes.io/projected/4b6baf37-55ba-48ef-bae6-c74b2f647956-kube-api-access-8l52f\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.889143 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4b6baf37-55ba-48ef-bae6-c74b2f647956-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.889169 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b6baf37-55ba-48ef-bae6-c74b2f647956-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.890431 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4b6baf37-55ba-48ef-bae6-c74b2f647956-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.897639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4b6baf37-55ba-48ef-bae6-c74b2f647956-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.913438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.916464 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l52f\" (UniqueName: \"kubernetes.io/projected/4b6baf37-55ba-48ef-bae6-c74b2f647956-kube-api-access-8l52f\") pod \"nmstate-console-plugin-5c78fc5d65-nn582\" (UID: \"4b6baf37-55ba-48ef-bae6-c74b2f647956\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.953143 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78699459f5-mlv57"] Feb 16 09:59:36 crc kubenswrapper[4814]: W0216 09:59:36.953679 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d8e1bb4_3c1d_43e2_b165_83e51d57ebb1.slice/crio-52fa8eda018a1279cedc6e5d1124ff8ab4c8b4b5d41f9e9092d54f52b99d9e16 WatchSource:0}: Error finding container 52fa8eda018a1279cedc6e5d1124ff8ab4c8b4b5d41f9e9092d54f52b99d9e16: Status 404 returned error can't find the container with id 52fa8eda018a1279cedc6e5d1124ff8ab4c8b4b5d41f9e9092d54f52b99d9e16 Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.959665 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:36 crc kubenswrapper[4814]: I0216 09:59:36.978067 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78699459f5-mlv57"] Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.064293 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117307 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117623 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-trusted-ca-bundle\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117664 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-oauth-serving-cert\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117688 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-serving-cert\") pod 
\"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117710 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-oauth-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117724 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7l5\" (UniqueName: \"kubernetes.io/projected/fb95bbec-f3ed-4690-9685-95a02aa07b0c-kube-api-access-gg7l5\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.117816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-service-ca\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.196218 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr"] Feb 16 09:59:37 crc kubenswrapper[4814]: W0216 09:59:37.201309 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6959ecae_2538_428c_956d_edf875e58947.slice/crio-fc7e7daeba2d8ddf54acecef39fc6ac56d603590c589225fe98c2f4543919952 WatchSource:0}: Error finding container fc7e7daeba2d8ddf54acecef39fc6ac56d603590c589225fe98c2f4543919952: Status 404 returned error can't find the 
container with id fc7e7daeba2d8ddf54acecef39fc6ac56d603590c589225fe98c2f4543919952 Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.219778 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-service-ca\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.219865 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.219899 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-trusted-ca-bundle\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.219958 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-oauth-serving-cert\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.220012 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-serving-cert\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " 
pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.220047 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-oauth-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.220096 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7l5\" (UniqueName: \"kubernetes.io/projected/fb95bbec-f3ed-4690-9685-95a02aa07b0c-kube-api-access-gg7l5\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.221126 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.221589 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-service-ca\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.223230 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-oauth-serving-cert\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 
09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.224408 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb95bbec-f3ed-4690-9685-95a02aa07b0c-trusted-ca-bundle\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.224763 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-serving-cert\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.228674 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fb95bbec-f3ed-4690-9685-95a02aa07b0c-console-oauth-config\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.239298 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7l5\" (UniqueName: \"kubernetes.io/projected/fb95bbec-f3ed-4690-9685-95a02aa07b0c-kube-api-access-gg7l5\") pod \"console-78699459f5-mlv57\" (UID: \"fb95bbec-f3ed-4690-9685-95a02aa07b0c\") " pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.301486 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582"] Feb 16 09:59:37 crc kubenswrapper[4814]: W0216 09:59:37.303979 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b6baf37_55ba_48ef_bae6_c74b2f647956.slice/crio-f93c8ea9dd3ca87e6ba7308d59e0abe9f6e621facc79d98ab7c47adee451013a WatchSource:0}: Error finding container f93c8ea9dd3ca87e6ba7308d59e0abe9f6e621facc79d98ab7c47adee451013a: Status 404 returned error can't find the container with id f93c8ea9dd3ca87e6ba7308d59e0abe9f6e621facc79d98ab7c47adee451013a Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.321799 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.325288 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4dc40630-922d-4c2a-b61b-2dc11a8aa9fd-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-fbxdv\" (UID: \"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.343317 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.472284 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.583337 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" event={"ID":"6959ecae-2538-428c-956d-edf875e58947","Type":"ContainerStarted","Data":"fc7e7daeba2d8ddf54acecef39fc6ac56d603590c589225fe98c2f4543919952"} Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.585320 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lvh27" event={"ID":"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1","Type":"ContainerStarted","Data":"52fa8eda018a1279cedc6e5d1124ff8ab4c8b4b5d41f9e9092d54f52b99d9e16"} Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.587719 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" event={"ID":"4b6baf37-55ba-48ef-bae6-c74b2f647956","Type":"ContainerStarted","Data":"f93c8ea9dd3ca87e6ba7308d59e0abe9f6e621facc79d98ab7c47adee451013a"} Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.699964 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv"] Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.756514 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78699459f5-mlv57"] Feb 16 09:59:37 crc kubenswrapper[4814]: W0216 09:59:37.762563 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb95bbec_f3ed_4690_9685_95a02aa07b0c.slice/crio-c61b753c6f4c24630c9a13a9b2b2d11c179a5e0451b778f591e7e425bf419133 WatchSource:0}: Error finding container c61b753c6f4c24630c9a13a9b2b2d11c179a5e0451b778f591e7e425bf419133: Status 404 returned error can't find the container with id c61b753c6f4c24630c9a13a9b2b2d11c179a5e0451b778f591e7e425bf419133 Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 
09:59:37.961325 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.961455 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.961560 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.962842 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 09:59:37 crc kubenswrapper[4814]: I0216 09:59:37.962938 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2" gracePeriod=600 Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.602369 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2" exitCode=0 Feb 16 
09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.602471 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2"} Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.602612 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53"} Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.602639 4814 scope.go:117] "RemoveContainer" containerID="2e13b721f455f43965f9fa3ab22df7aa7002ec343bfde28e25849e50a929cccb" Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.606374 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78699459f5-mlv57" event={"ID":"fb95bbec-f3ed-4690-9685-95a02aa07b0c","Type":"ContainerStarted","Data":"7fa927108d0951704128c243207c97d740c1d6c9408fcf7280c23236033ab407"} Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.606429 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78699459f5-mlv57" event={"ID":"fb95bbec-f3ed-4690-9685-95a02aa07b0c","Type":"ContainerStarted","Data":"c61b753c6f4c24630c9a13a9b2b2d11c179a5e0451b778f591e7e425bf419133"} Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.608068 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" event={"ID":"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd","Type":"ContainerStarted","Data":"0006436163a7d0eb6048aded0eaa6c8bff2ca379db56f12127580af3d28a04a2"} Feb 16 09:59:38 crc kubenswrapper[4814]: I0216 09:59:38.650667 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78699459f5-mlv57" 
podStartSLOduration=2.650643015 podStartE2EDuration="2.650643015s" podCreationTimestamp="2026-02-16 09:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 09:59:38.646608353 +0000 UTC m=+836.339764533" watchObservedRunningTime="2026-02-16 09:59:38.650643015 +0000 UTC m=+836.343799195" Feb 16 09:59:41 crc kubenswrapper[4814]: I0216 09:59:41.640422 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" event={"ID":"4dc40630-922d-4c2a-b61b-2dc11a8aa9fd","Type":"ContainerStarted","Data":"8e4158f8ef2a4db564900aa84cab4421aaf6c133b16928a517808368cf5d658e"} Feb 16 09:59:41 crc kubenswrapper[4814]: I0216 09:59:41.641519 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 09:59:41 crc kubenswrapper[4814]: I0216 09:59:41.642591 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" event={"ID":"6959ecae-2538-428c-956d-edf875e58947","Type":"ContainerStarted","Data":"3a0bea9485018bd373c16525194cb4703a204c65007fdbb66ad0e9990bd1c196"} Feb 16 09:59:41 crc kubenswrapper[4814]: I0216 09:59:41.644384 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" event={"ID":"4b6baf37-55ba-48ef-bae6-c74b2f647956","Type":"ContainerStarted","Data":"8add8ad489b310f8821e62c17d32c7ea3470aea6a8e5725dfaa2000e02fd809c"} Feb 16 09:59:41 crc kubenswrapper[4814]: I0216 09:59:41.660811 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" podStartSLOduration=2.119397335 podStartE2EDuration="5.660783209s" podCreationTimestamp="2026-02-16 09:59:36 +0000 UTC" firstStartedPulling="2026-02-16 09:59:37.712897061 +0000 UTC m=+835.406053241" lastFinishedPulling="2026-02-16 
09:59:41.254282935 +0000 UTC m=+838.947439115" observedRunningTime="2026-02-16 09:59:41.658212577 +0000 UTC m=+839.351368767" watchObservedRunningTime="2026-02-16 09:59:41.660783209 +0000 UTC m=+839.353939389" Feb 16 09:59:42 crc kubenswrapper[4814]: I0216 09:59:42.660380 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lvh27" event={"ID":"4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1","Type":"ContainerStarted","Data":"cc89ada2f9e72f12ac9fc5f31c3591b82a9bf42abce2c3fac7299da31843d876"} Feb 16 09:59:42 crc kubenswrapper[4814]: I0216 09:59:42.661512 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:42 crc kubenswrapper[4814]: I0216 09:59:42.682159 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lvh27" podStartSLOduration=2.413166961 podStartE2EDuration="6.682084047s" podCreationTimestamp="2026-02-16 09:59:36 +0000 UTC" firstStartedPulling="2026-02-16 09:59:36.958796146 +0000 UTC m=+834.651952326" lastFinishedPulling="2026-02-16 09:59:41.227713232 +0000 UTC m=+838.920869412" observedRunningTime="2026-02-16 09:59:42.679985759 +0000 UTC m=+840.373141959" watchObservedRunningTime="2026-02-16 09:59:42.682084047 +0000 UTC m=+840.375240247" Feb 16 09:59:42 crc kubenswrapper[4814]: I0216 09:59:42.686814 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-nn582" podStartSLOduration=2.816715244 podStartE2EDuration="6.686794439s" podCreationTimestamp="2026-02-16 09:59:36 +0000 UTC" firstStartedPulling="2026-02-16 09:59:37.306602122 +0000 UTC m=+834.999758292" lastFinishedPulling="2026-02-16 09:59:41.176681307 +0000 UTC m=+838.869837487" observedRunningTime="2026-02-16 09:59:41.685062838 +0000 UTC m=+839.378219038" watchObservedRunningTime="2026-02-16 09:59:42.686794439 +0000 UTC m=+840.379950619" Feb 16 09:59:45 crc 
kubenswrapper[4814]: I0216 09:59:45.686715 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" event={"ID":"6959ecae-2538-428c-956d-edf875e58947","Type":"ContainerStarted","Data":"2600ac6b41f1f64d4e4f4dfca1c1a02d8c0c4fdaa345ba8ae9e3f8c6c2644bc8"} Feb 16 09:59:45 crc kubenswrapper[4814]: I0216 09:59:45.717750 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c4tzr" podStartSLOduration=2.211999534 podStartE2EDuration="9.717724004s" podCreationTimestamp="2026-02-16 09:59:36 +0000 UTC" firstStartedPulling="2026-02-16 09:59:37.204073998 +0000 UTC m=+834.897230178" lastFinishedPulling="2026-02-16 09:59:44.709798468 +0000 UTC m=+842.402954648" observedRunningTime="2026-02-16 09:59:45.714307158 +0000 UTC m=+843.407463408" watchObservedRunningTime="2026-02-16 09:59:45.717724004 +0000 UTC m=+843.410880174" Feb 16 09:59:46 crc kubenswrapper[4814]: I0216 09:59:46.948647 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lvh27" Feb 16 09:59:47 crc kubenswrapper[4814]: I0216 09:59:47.344689 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:47 crc kubenswrapper[4814]: I0216 09:59:47.344794 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:47 crc kubenswrapper[4814]: I0216 09:59:47.350320 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:47 crc kubenswrapper[4814]: I0216 09:59:47.701221 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78699459f5-mlv57" Feb 16 09:59:47 crc kubenswrapper[4814]: I0216 09:59:47.759688 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-4xwqr"] Feb 16 09:59:57 crc kubenswrapper[4814]: I0216 09:59:57.481411 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-fbxdv" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.169601 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx"] Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.171490 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.175306 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx"] Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.175956 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.175995 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.303061 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.303135 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95b9g\" (UniqueName: \"kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g\") pod 
\"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.303229 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.404248 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.404317 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95b9g\" (UniqueName: \"kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.404387 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.405601 4814 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.418607 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.422988 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95b9g\" (UniqueName: \"kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g\") pod \"collect-profiles-29520600-xhvwx\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.498911 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.733120 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx"] Feb 16 10:00:00 crc kubenswrapper[4814]: I0216 10:00:00.799032 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" event={"ID":"f6ae10c9-d249-4455-a8ba-1ceef545a1b9","Type":"ContainerStarted","Data":"679fed11eef4c73e47aa0f8427beed61e02aae023900d78ff2a53f56d41d4c67"} Feb 16 10:00:01 crc kubenswrapper[4814]: I0216 10:00:01.808829 4814 generic.go:334] "Generic (PLEG): container finished" podID="f6ae10c9-d249-4455-a8ba-1ceef545a1b9" containerID="543910e54dd85643b4dbd4de839b4134cd70692fd0accd647f989f7d744b024f" exitCode=0 Feb 16 10:00:01 crc kubenswrapper[4814]: I0216 10:00:01.808898 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" event={"ID":"f6ae10c9-d249-4455-a8ba-1ceef545a1b9","Type":"ContainerDied","Data":"543910e54dd85643b4dbd4de839b4134cd70692fd0accd647f989f7d744b024f"} Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.089454 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.156085 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95b9g\" (UniqueName: \"kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g\") pod \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.156163 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume\") pod \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.156253 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume\") pod \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\" (UID: \"f6ae10c9-d249-4455-a8ba-1ceef545a1b9\") " Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.158784 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume" (OuterVolumeSpecName: "config-volume") pod "f6ae10c9-d249-4455-a8ba-1ceef545a1b9" (UID: "f6ae10c9-d249-4455-a8ba-1ceef545a1b9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.164025 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g" (OuterVolumeSpecName: "kube-api-access-95b9g") pod "f6ae10c9-d249-4455-a8ba-1ceef545a1b9" (UID: "f6ae10c9-d249-4455-a8ba-1ceef545a1b9"). 
InnerVolumeSpecName "kube-api-access-95b9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.168779 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f6ae10c9-d249-4455-a8ba-1ceef545a1b9" (UID: "f6ae10c9-d249-4455-a8ba-1ceef545a1b9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.258390 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.258443 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95b9g\" (UniqueName: \"kubernetes.io/projected/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-kube-api-access-95b9g\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.258459 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6ae10c9-d249-4455-a8ba-1ceef545a1b9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.829254 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" event={"ID":"f6ae10c9-d249-4455-a8ba-1ceef545a1b9","Type":"ContainerDied","Data":"679fed11eef4c73e47aa0f8427beed61e02aae023900d78ff2a53f56d41d4c67"} Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.829309 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679fed11eef4c73e47aa0f8427beed61e02aae023900d78ff2a53f56d41d4c67" Feb 16 10:00:03 crc kubenswrapper[4814]: I0216 10:00:03.829348 4814 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.770666 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj"] Feb 16 10:00:11 crc kubenswrapper[4814]: E0216 10:00:11.771973 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ae10c9-d249-4455-a8ba-1ceef545a1b9" containerName="collect-profiles" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.771992 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ae10c9-d249-4455-a8ba-1ceef545a1b9" containerName="collect-profiles" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.772145 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ae10c9-d249-4455-a8ba-1ceef545a1b9" containerName="collect-profiles" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.776664 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.781692 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.782247 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj"] Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.919148 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.919221 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5rq\" (UniqueName: \"kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:11 crc kubenswrapper[4814]: I0216 10:00:11.919302 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: 
I0216 10:00:12.020408 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.020485 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5rq\" (UniqueName: \"kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.020576 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.021481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.021550 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.046150 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5rq\" (UniqueName: \"kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.096894 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.351130 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj"] Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.818340 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4xwqr" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerName="console" containerID="cri-o://76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6" gracePeriod=15 Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.919730 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerID="53eca4c29a16f86de2d5e30715652f1763b76a617ec0ce48f97f84e52fd6e26e" exitCode=0 Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.919796 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" event={"ID":"6a251c74-29fa-41ea-8f69-5cad14030a5f","Type":"ContainerDied","Data":"53eca4c29a16f86de2d5e30715652f1763b76a617ec0ce48f97f84e52fd6e26e"} Feb 16 10:00:12 crc kubenswrapper[4814]: I0216 10:00:12.919828 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" event={"ID":"6a251c74-29fa-41ea-8f69-5cad14030a5f","Type":"ContainerStarted","Data":"4b48924d48745ca0646151c91426da8d276e532846c86a18db36bf191b0a41e4"} Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.188517 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xwqr_13dde5e3-1577-420f-9b33-4d89a1a8749a/console/0.log" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.189089 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.348730 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.348820 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.348884 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config\") pod 
\"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.348929 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.349829 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.349856 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config" (OuterVolumeSpecName: "console-config") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.349906 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.349938 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bs8z\" (UniqueName: \"kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.350284 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.350488 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert\") pod \"13dde5e3-1577-420f-9b33-4d89a1a8749a\" (UID: \"13dde5e3-1577-420f-9b33-4d89a1a8749a\") " Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.350820 4814 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.350836 4814 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.350845 4814 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.351186 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca" (OuterVolumeSpecName: "service-ca") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.358951 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.364921 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.365227 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z" (OuterVolumeSpecName: "kube-api-access-2bs8z") pod "13dde5e3-1577-420f-9b33-4d89a1a8749a" (UID: "13dde5e3-1577-420f-9b33-4d89a1a8749a"). InnerVolumeSpecName "kube-api-access-2bs8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.452675 4814 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.452716 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bs8z\" (UniqueName: \"kubernetes.io/projected/13dde5e3-1577-420f-9b33-4d89a1a8749a-kube-api-access-2bs8z\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.452735 4814 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/13dde5e3-1577-420f-9b33-4d89a1a8749a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.452749 4814 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/13dde5e3-1577-420f-9b33-4d89a1a8749a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928281 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xwqr_13dde5e3-1577-420f-9b33-4d89a1a8749a/console/0.log" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928391 4814 generic.go:334] "Generic (PLEG): container finished" podID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerID="76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6" exitCode=2 Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928483 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xwqr" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928473 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xwqr" event={"ID":"13dde5e3-1577-420f-9b33-4d89a1a8749a","Type":"ContainerDied","Data":"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6"} Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928624 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xwqr" event={"ID":"13dde5e3-1577-420f-9b33-4d89a1a8749a","Type":"ContainerDied","Data":"2323eae6c14f13b0736236a8a88b2dd2c74d6c2a83c13e091f3b93b5aee30099"} Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.928660 4814 scope.go:117] "RemoveContainer" containerID="76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.970588 4814 scope.go:117] "RemoveContainer" containerID="76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6" Feb 16 10:00:13 crc kubenswrapper[4814]: E0216 10:00:13.975516 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6\": container with ID starting with 76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6 not found: ID does not exist" containerID="76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.975600 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6"} err="failed to get container status \"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6\": rpc error: code = NotFound desc = could not find container \"76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6\": 
container with ID starting with 76456f561d8082f77b62755ccb5ea0e6e8d408f684c2d793b07cfdcda474d9a6 not found: ID does not exist" Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.984520 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xwqr"] Feb 16 10:00:13 crc kubenswrapper[4814]: I0216 10:00:13.995957 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4xwqr"] Feb 16 10:00:14 crc kubenswrapper[4814]: I0216 10:00:14.936599 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerID="d1c552eaba4d454ddd6d936fb16c6c1f1eafc065a3ae057ce778950c18825b93" exitCode=0 Feb 16 10:00:14 crc kubenswrapper[4814]: I0216 10:00:14.936664 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" event={"ID":"6a251c74-29fa-41ea-8f69-5cad14030a5f","Type":"ContainerDied","Data":"d1c552eaba4d454ddd6d936fb16c6c1f1eafc065a3ae057ce778950c18825b93"} Feb 16 10:00:15 crc kubenswrapper[4814]: I0216 10:00:15.002353 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" path="/var/lib/kubelet/pods/13dde5e3-1577-420f-9b33-4d89a1a8749a/volumes" Feb 16 10:00:15 crc kubenswrapper[4814]: I0216 10:00:15.948199 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerID="6853fecf161cbae7f58cfb3a25412c0e60e462add40190c48019118467e6a0c3" exitCode=0 Feb 16 10:00:15 crc kubenswrapper[4814]: I0216 10:00:15.948322 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" event={"ID":"6a251c74-29fa-41ea-8f69-5cad14030a5f","Type":"ContainerDied","Data":"6853fecf161cbae7f58cfb3a25412c0e60e462add40190c48019118467e6a0c3"} Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 
10:00:17.350862 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.516168 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle\") pod \"6a251c74-29fa-41ea-8f69-5cad14030a5f\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.516275 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km5rq\" (UniqueName: \"kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq\") pod \"6a251c74-29fa-41ea-8f69-5cad14030a5f\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.516387 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util\") pod \"6a251c74-29fa-41ea-8f69-5cad14030a5f\" (UID: \"6a251c74-29fa-41ea-8f69-5cad14030a5f\") " Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.517971 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle" (OuterVolumeSpecName: "bundle") pod "6a251c74-29fa-41ea-8f69-5cad14030a5f" (UID: "6a251c74-29fa-41ea-8f69-5cad14030a5f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.523703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq" (OuterVolumeSpecName: "kube-api-access-km5rq") pod "6a251c74-29fa-41ea-8f69-5cad14030a5f" (UID: "6a251c74-29fa-41ea-8f69-5cad14030a5f"). InnerVolumeSpecName "kube-api-access-km5rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.525178 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.525238 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km5rq\" (UniqueName: \"kubernetes.io/projected/6a251c74-29fa-41ea-8f69-5cad14030a5f-kube-api-access-km5rq\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.533976 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util" (OuterVolumeSpecName: "util") pod "6a251c74-29fa-41ea-8f69-5cad14030a5f" (UID: "6a251c74-29fa-41ea-8f69-5cad14030a5f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.626310 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a251c74-29fa-41ea-8f69-5cad14030a5f-util\") on node \"crc\" DevicePath \"\"" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.963505 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" event={"ID":"6a251c74-29fa-41ea-8f69-5cad14030a5f","Type":"ContainerDied","Data":"4b48924d48745ca0646151c91426da8d276e532846c86a18db36bf191b0a41e4"} Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.963608 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b48924d48745ca0646151c91426da8d276e532846c86a18db36bf191b0a41e4" Feb 16 10:00:17 crc kubenswrapper[4814]: I0216 10:00:17.964012 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj" Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.917251 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"] Feb 16 10:00:26 crc kubenswrapper[4814]: E0216 10:00:26.918443 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="extract" Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918461 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="extract" Feb 16 10:00:26 crc kubenswrapper[4814]: E0216 10:00:26.918477 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="util" Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918485 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="util"
Feb 16 10:00:26 crc kubenswrapper[4814]: E0216 10:00:26.918504 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerName="console"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918513 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerName="console"
Feb 16 10:00:26 crc kubenswrapper[4814]: E0216 10:00:26.918561 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="pull"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918569 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="pull"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918674 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a251c74-29fa-41ea-8f69-5cad14030a5f" containerName="extract"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.918683 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="13dde5e3-1577-420f-9b33-4d89a1a8749a" containerName="console"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.919235 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.930930 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4d5rj"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.931175 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.931325 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.934629 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.945714 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.969322 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"]
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.977815 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-webhook-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.977913 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vgz\" (UniqueName: \"kubernetes.io/projected/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-kube-api-access-d8vgz\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:26 crc kubenswrapper[4814]: I0216 10:00:26.978014 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-apiservice-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.079511 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vgz\" (UniqueName: \"kubernetes.io/projected/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-kube-api-access-d8vgz\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.079644 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-apiservice-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.079679 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-webhook-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.102729 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-apiservice-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.112267 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-webhook-cert\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.124669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vgz\" (UniqueName: \"kubernetes.io/projected/4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81-kube-api-access-d8vgz\") pod \"metallb-operator-controller-manager-6f6878f94-mgj6r\" (UID: \"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81\") " pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.256490 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.474771 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"]
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.475723 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.480697 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5w2jc"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.480836 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.480921 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.558648 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.559150 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddnsd\" (UniqueName: \"kubernetes.io/projected/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-kube-api-access-ddnsd\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.559194 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-webhook-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.563909 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"]
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.678787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.678888 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddnsd\" (UniqueName: \"kubernetes.io/projected/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-kube-api-access-ddnsd\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.678926 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-webhook-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.711151 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-webhook-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.717036 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddnsd\" (UniqueName: \"kubernetes.io/projected/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-kube-api-access-ddnsd\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.719481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6a1b6a4d-7919-4cd7-bb65-bca5b645379f-apiservice-cert\") pod \"metallb-operator-webhook-server-6f6c7df7cb-8rpbf\" (UID: \"6a1b6a4d-7919-4cd7-bb65-bca5b645379f\") " pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.796873 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:27 crc kubenswrapper[4814]: I0216 10:00:27.826110 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"]
Feb 16 10:00:27 crc kubenswrapper[4814]: W0216 10:00:27.834896 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f9ef0e1_d42f_4c53_b61f_ac0fc2bcea81.slice/crio-87127b72e9befc248110e90a987f59bdc64b913d40310c164dc50601e43bf2e0 WatchSource:0}: Error finding container 87127b72e9befc248110e90a987f59bdc64b913d40310c164dc50601e43bf2e0: Status 404 returned error can't find the container with id 87127b72e9befc248110e90a987f59bdc64b913d40310c164dc50601e43bf2e0
Feb 16 10:00:28 crc kubenswrapper[4814]: I0216 10:00:28.041217 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r" event={"ID":"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81","Type":"ContainerStarted","Data":"87127b72e9befc248110e90a987f59bdc64b913d40310c164dc50601e43bf2e0"}
Feb 16 10:00:28 crc kubenswrapper[4814]: I0216 10:00:28.146490 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"]
Feb 16 10:00:28 crc kubenswrapper[4814]: W0216 10:00:28.150797 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a1b6a4d_7919_4cd7_bb65_bca5b645379f.slice/crio-d662eccfc5d27fc4a227ece3703f43d53b2007a94786ad32136550b7cf9ea66e WatchSource:0}: Error finding container d662eccfc5d27fc4a227ece3703f43d53b2007a94786ad32136550b7cf9ea66e: Status 404 returned error can't find the container with id d662eccfc5d27fc4a227ece3703f43d53b2007a94786ad32136550b7cf9ea66e
Feb 16 10:00:29 crc kubenswrapper[4814]: I0216 10:00:29.050171 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf" event={"ID":"6a1b6a4d-7919-4cd7-bb65-bca5b645379f","Type":"ContainerStarted","Data":"d662eccfc5d27fc4a227ece3703f43d53b2007a94786ad32136550b7cf9ea66e"}
Feb 16 10:00:35 crc kubenswrapper[4814]: I0216 10:00:35.111954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf" event={"ID":"6a1b6a4d-7919-4cd7-bb65-bca5b645379f","Type":"ContainerStarted","Data":"b6a52427676ead34ba3dea458b13901553d6bdc219657fecb6f61d7abc72e2ff"}
Feb 16 10:00:35 crc kubenswrapper[4814]: I0216 10:00:35.113085 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:35 crc kubenswrapper[4814]: I0216 10:00:35.114312 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r" event={"ID":"4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81","Type":"ContainerStarted","Data":"dad06e3e1b17648cc7c1da230c64a89043988fc8cf575a057323aca8eac7dba7"}
Feb 16 10:00:35 crc kubenswrapper[4814]: I0216 10:00:35.114731 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:00:35 crc kubenswrapper[4814]: I0216 10:00:35.138150 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf" podStartSLOduration=2.093859379 podStartE2EDuration="8.138123625s" podCreationTimestamp="2026-02-16 10:00:27 +0000 UTC" firstStartedPulling="2026-02-16 10:00:28.155130716 +0000 UTC m=+885.848286896" lastFinishedPulling="2026-02-16 10:00:34.199394962 +0000 UTC m=+891.892551142" observedRunningTime="2026-02-16 10:00:35.131947911 +0000 UTC m=+892.825104101" watchObservedRunningTime="2026-02-16 10:00:35.138123625 +0000 UTC m=+892.831279805"
Feb 16 10:00:47 crc kubenswrapper[4814]: I0216 10:00:47.805128 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f6c7df7cb-8rpbf"
Feb 16 10:00:47 crc kubenswrapper[4814]: I0216 10:00:47.831833 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r" podStartSLOduration=15.568819068 podStartE2EDuration="21.831800484s" podCreationTimestamp="2026-02-16 10:00:26 +0000 UTC" firstStartedPulling="2026-02-16 10:00:27.8391698 +0000 UTC m=+885.532325970" lastFinishedPulling="2026-02-16 10:00:34.102151216 +0000 UTC m=+891.795307386" observedRunningTime="2026-02-16 10:00:35.168335708 +0000 UTC m=+892.861491888" watchObservedRunningTime="2026-02-16 10:00:47.831800484 +0000 UTC m=+905.524956664"
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.260658 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6f6878f94-mgj6r"
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.979315 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b965k"]
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.982485 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.989132 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.989180 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 16 10:01:07 crc kubenswrapper[4814]: I0216 10:01:07.994261 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lk7gm"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.013014 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"]
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.014241 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.016807 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.033244 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"]
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065444 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgc9m\" (UniqueName: \"kubernetes.io/projected/5b42fe8a-c4e7-48ca-97a1-6739547d284f-kube-api-access-wgc9m\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065501 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-kube-api-access-qv8ct\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065526 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-sockets\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065597 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065743 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.065958 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.066022 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-conf\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.066104 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-reloader\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.066144 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-startup\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.112385 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-55g6q"]
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.113498 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.116784 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.116812 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.116851 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vxkhx"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.116812 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.134669 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-4vfps"]
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.135851 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.140461 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.157455 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-4vfps"]
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168428 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168508 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sklt9\" (UniqueName: \"kubernetes.io/projected/d901565c-c77f-4940-aa1c-bc148ed6cb2b-kube-api-access-sklt9\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168595 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168631 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-conf\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168665 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-reloader\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168687 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-startup\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168720 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168745 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168771 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metallb-excludel2\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168798 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgc9m\" (UniqueName: \"kubernetes.io/projected/5b42fe8a-c4e7-48ca-97a1-6739547d284f-kube-api-access-wgc9m\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168842 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-kube-api-access-qv8ct\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168871 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-sockets\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.168913 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.169106 4814 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.169189 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs podName:5b42fe8a-c4e7-48ca-97a1-6739547d284f nodeName:}" failed. No retries permitted until 2026-02-16 10:01:08.669160813 +0000 UTC m=+926.362316993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs") pod "frr-k8s-b965k" (UID: "5b42fe8a-c4e7-48ca-97a1-6739547d284f") : secret "frr-k8s-certs-secret" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.170370 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.170736 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-conf\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.171067 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-reloader\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.172122 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-startup\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.173274 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5b42fe8a-c4e7-48ca-97a1-6739547d284f-frr-sockets\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.178407 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.199140 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgc9m\" (UniqueName: \"kubernetes.io/projected/5b42fe8a-c4e7-48ca-97a1-6739547d284f-kube-api-access-wgc9m\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.201503 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573-kube-api-access-qv8ct\") pod \"frr-k8s-webhook-server-78b44bf5bb-b86zg\" (UID: \"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271105 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs5td\" (UniqueName: \"kubernetes.io/projected/3e127231-de8b-4ee9-9bae-8cefb19310a0-kube-api-access-cs5td\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271204 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-cert\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271255 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sklt9\" (UniqueName: \"kubernetes.io/projected/d901565c-c77f-4940-aa1c-bc148ed6cb2b-kube-api-access-sklt9\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271398 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-metrics-certs\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271454 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271482 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.271512 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metallb-excludel2\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.271692 4814 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.271793 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist podName:d901565c-c77f-4940-aa1c-bc148ed6cb2b nodeName:}" failed. No retries permitted until 2026-02-16 10:01:08.771768775 +0000 UTC m=+926.464924955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist") pod "speaker-55g6q" (UID: "d901565c-c77f-4940-aa1c-bc148ed6cb2b") : secret "metallb-memberlist" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.271782 4814 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.271925 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs podName:d901565c-c77f-4940-aa1c-bc148ed6cb2b nodeName:}" failed. No retries permitted until 2026-02-16 10:01:08.771890928 +0000 UTC m=+926.465047308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs") pod "speaker-55g6q" (UID: "d901565c-c77f-4940-aa1c-bc148ed6cb2b") : secret "speaker-certs-secret" not found
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.272579 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metallb-excludel2\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.296219 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sklt9\" (UniqueName: \"kubernetes.io/projected/d901565c-c77f-4940-aa1c-bc148ed6cb2b-kube-api-access-sklt9\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.330093 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.373574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-metrics-certs\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.374183 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs5td\" (UniqueName: \"kubernetes.io/projected/3e127231-de8b-4ee9-9bae-8cefb19310a0-kube-api-access-cs5td\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.374219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-cert\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.377698 4814 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.379319 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-metrics-certs\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps"
Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.389005 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName:
\"kubernetes.io/secret/3e127231-de8b-4ee9-9bae-8cefb19310a0-cert\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.398757 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs5td\" (UniqueName: \"kubernetes.io/projected/3e127231-de8b-4ee9-9bae-8cefb19310a0-kube-api-access-cs5td\") pod \"controller-69bbfbf88f-4vfps\" (UID: \"3e127231-de8b-4ee9-9bae-8cefb19310a0\") " pod="metallb-system/controller-69bbfbf88f-4vfps" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.450053 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-4vfps" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.679480 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.685858 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b42fe8a-c4e7-48ca-97a1-6739547d284f-metrics-certs\") pod \"frr-k8s-b965k\" (UID: \"5b42fe8a-c4e7-48ca-97a1-6739547d284f\") " pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.781415 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.781471 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q" Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.781707 4814 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 10:01:08 crc kubenswrapper[4814]: E0216 10:01:08.781964 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist podName:d901565c-c77f-4940-aa1c-bc148ed6cb2b nodeName:}" failed. No retries permitted until 2026-02-16 10:01:09.78193237 +0000 UTC m=+927.475088550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist") pod "speaker-55g6q" (UID: "d901565c-c77f-4940-aa1c-bc148ed6cb2b") : secret "metallb-memberlist" not found Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.786577 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-metrics-certs\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q" Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.899387 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg"] Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.904209 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:08 crc kubenswrapper[4814]: W0216 10:01:08.915852 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a69ba3b_0b8a_4c6c_93c1_edfdd29e2573.slice/crio-2a047084930ee7884b38b0f7f2c6e7249b8e39a03bae5fbd5380907725559b5c WatchSource:0}: Error finding container 2a047084930ee7884b38b0f7f2c6e7249b8e39a03bae5fbd5380907725559b5c: Status 404 returned error can't find the container with id 2a047084930ee7884b38b0f7f2c6e7249b8e39a03bae5fbd5380907725559b5c Feb 16 10:01:08 crc kubenswrapper[4814]: I0216 10:01:08.990743 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-4vfps"] Feb 16 10:01:09 crc kubenswrapper[4814]: W0216 10:01:09.009820 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e127231_de8b_4ee9_9bae_8cefb19310a0.slice/crio-258ac2e74197f4c5995476436886b215850a8306ac09e5f5e7e5cc7efb9fcf2a WatchSource:0}: Error finding container 258ac2e74197f4c5995476436886b215850a8306ac09e5f5e7e5cc7efb9fcf2a: Status 404 returned error can't find the container with id 258ac2e74197f4c5995476436886b215850a8306ac09e5f5e7e5cc7efb9fcf2a Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.354514 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg" event={"ID":"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573","Type":"ContainerStarted","Data":"2a047084930ee7884b38b0f7f2c6e7249b8e39a03bae5fbd5380907725559b5c"} Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.356075 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"229e1afb477f13fe0b53cba7a651379d8d6426da4d6a77c06c1f46ae838ec37c"} Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 
10:01:09.358086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4vfps" event={"ID":"3e127231-de8b-4ee9-9bae-8cefb19310a0","Type":"ContainerStarted","Data":"594a896d14e803514331378286d16f8d23c840568dd5ad383614963db51c9734"} Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.358153 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4vfps" event={"ID":"3e127231-de8b-4ee9-9bae-8cefb19310a0","Type":"ContainerStarted","Data":"ed2c569aed5c193344122fcb259ec52fc751aecd552f5837655b7f0cc4cb38ec"} Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.358163 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-4vfps" event={"ID":"3e127231-de8b-4ee9-9bae-8cefb19310a0","Type":"ContainerStarted","Data":"258ac2e74197f4c5995476436886b215850a8306ac09e5f5e7e5cc7efb9fcf2a"} Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.358203 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-4vfps" Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.377393 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-4vfps" podStartSLOduration=1.377362607 podStartE2EDuration="1.377362607s" podCreationTimestamp="2026-02-16 10:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:01:09.376678878 +0000 UTC m=+927.069835088" watchObservedRunningTime="2026-02-16 10:01:09.377362607 +0000 UTC m=+927.070518787" Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.801037 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " 
pod="metallb-system/speaker-55g6q" Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.809275 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d901565c-c77f-4940-aa1c-bc148ed6cb2b-memberlist\") pod \"speaker-55g6q\" (UID: \"d901565c-c77f-4940-aa1c-bc148ed6cb2b\") " pod="metallb-system/speaker-55g6q" Feb 16 10:01:09 crc kubenswrapper[4814]: I0216 10:01:09.926486 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-55g6q" Feb 16 10:01:09 crc kubenswrapper[4814]: W0216 10:01:09.950751 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd901565c_c77f_4940_aa1c_bc148ed6cb2b.slice/crio-e92fba9379a7d78c996bdae29a543d71e6058228c3bb37b77443a948016074d0 WatchSource:0}: Error finding container e92fba9379a7d78c996bdae29a543d71e6058228c3bb37b77443a948016074d0: Status 404 returned error can't find the container with id e92fba9379a7d78c996bdae29a543d71e6058228c3bb37b77443a948016074d0 Feb 16 10:01:10 crc kubenswrapper[4814]: I0216 10:01:10.366107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-55g6q" event={"ID":"d901565c-c77f-4940-aa1c-bc148ed6cb2b","Type":"ContainerStarted","Data":"4a95aaf71ac190c4303c8ab763d29cb54c810032bb968793017805f18cdcf5ef"} Feb 16 10:01:10 crc kubenswrapper[4814]: I0216 10:01:10.367462 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-55g6q" event={"ID":"d901565c-c77f-4940-aa1c-bc148ed6cb2b","Type":"ContainerStarted","Data":"e92fba9379a7d78c996bdae29a543d71e6058228c3bb37b77443a948016074d0"} Feb 16 10:01:11 crc kubenswrapper[4814]: I0216 10:01:11.407384 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-55g6q" 
event={"ID":"d901565c-c77f-4940-aa1c-bc148ed6cb2b","Type":"ContainerStarted","Data":"7aa426cdf084d3dada7938a2771112ed7ee0b4961af8d081e36bd500be120dc2"} Feb 16 10:01:11 crc kubenswrapper[4814]: I0216 10:01:11.407595 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-55g6q" Feb 16 10:01:11 crc kubenswrapper[4814]: I0216 10:01:11.446304 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-55g6q" podStartSLOduration=3.446282632 podStartE2EDuration="3.446282632s" podCreationTimestamp="2026-02-16 10:01:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:01:11.442393653 +0000 UTC m=+929.135549843" watchObservedRunningTime="2026-02-16 10:01:11.446282632 +0000 UTC m=+929.139438812" Feb 16 10:01:18 crc kubenswrapper[4814]: I0216 10:01:18.502679 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg" event={"ID":"3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573","Type":"ContainerStarted","Data":"6451aa42448e896f58d62b5599aaed6e147b1c2771f99bfea93f80a2ed27e294"} Feb 16 10:01:18 crc kubenswrapper[4814]: I0216 10:01:18.503572 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg" Feb 16 10:01:18 crc kubenswrapper[4814]: I0216 10:01:18.505168 4814 generic.go:334] "Generic (PLEG): container finished" podID="5b42fe8a-c4e7-48ca-97a1-6739547d284f" containerID="c81682fee183762f0b27c60cfb7639dc34a03aa280ac97be96dd73790dd64dd1" exitCode=0 Feb 16 10:01:18 crc kubenswrapper[4814]: I0216 10:01:18.505230 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerDied","Data":"c81682fee183762f0b27c60cfb7639dc34a03aa280ac97be96dd73790dd64dd1"} Feb 16 10:01:18 crc kubenswrapper[4814]: 
I0216 10:01:18.530461 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg" podStartSLOduration=2.9838862170000002 podStartE2EDuration="11.530433762s" podCreationTimestamp="2026-02-16 10:01:07 +0000 UTC" firstStartedPulling="2026-02-16 10:01:08.919805114 +0000 UTC m=+926.612961294" lastFinishedPulling="2026-02-16 10:01:17.466352659 +0000 UTC m=+935.159508839" observedRunningTime="2026-02-16 10:01:18.524712601 +0000 UTC m=+936.217868831" watchObservedRunningTime="2026-02-16 10:01:18.530433762 +0000 UTC m=+936.223589952" Feb 16 10:01:19 crc kubenswrapper[4814]: I0216 10:01:19.516046 4814 generic.go:334] "Generic (PLEG): container finished" podID="5b42fe8a-c4e7-48ca-97a1-6739547d284f" containerID="857fd858f5ffd16bc034f46c69d7c07a1d2c60fede02ac88baa49007f84b8441" exitCode=0 Feb 16 10:01:19 crc kubenswrapper[4814]: I0216 10:01:19.516160 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerDied","Data":"857fd858f5ffd16bc034f46c69d7c07a1d2c60fede02ac88baa49007f84b8441"} Feb 16 10:01:20 crc kubenswrapper[4814]: I0216 10:01:20.526666 4814 generic.go:334] "Generic (PLEG): container finished" podID="5b42fe8a-c4e7-48ca-97a1-6739547d284f" containerID="e325876d862e51222251b00d02d0730305c95250d1c4f58565ef519767369c5c" exitCode=0 Feb 16 10:01:20 crc kubenswrapper[4814]: I0216 10:01:20.526774 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerDied","Data":"e325876d862e51222251b00d02d0730305c95250d1c4f58565ef519767369c5c"} Feb 16 10:01:21 crc kubenswrapper[4814]: I0216 10:01:21.542336 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" 
event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"f90190c978ba948d93741e9e555c12f77f6cd9d14813d1f64c311bcc757d4004"} Feb 16 10:01:21 crc kubenswrapper[4814]: I0216 10:01:21.542890 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"1355835c35defa0d95d5525cb4895e609ee4195c5110728b8a47dc308f8bd4a2"} Feb 16 10:01:21 crc kubenswrapper[4814]: I0216 10:01:21.542903 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"999b1ef5add8dba1ca8e5befc9c3103e4e7842723b909bb526f44cb787e6cbd3"} Feb 16 10:01:21 crc kubenswrapper[4814]: I0216 10:01:21.542915 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"2efa10e729ebe157925958ce56cd6d86ff23980349f20c9b8cd7fbc2f294e66d"} Feb 16 10:01:21 crc kubenswrapper[4814]: I0216 10:01:21.542926 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"0b7070b7bd552e932ee0fe943b69129d69d1a3bcd6a595f3d42becd62d188b56"} Feb 16 10:01:22 crc kubenswrapper[4814]: I0216 10:01:22.555078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b965k" event={"ID":"5b42fe8a-c4e7-48ca-97a1-6739547d284f","Type":"ContainerStarted","Data":"110a2bafa03d3f1e503c9e86c1278c1f22fd2604b3f9c76b7ce739526c134c97"} Feb 16 10:01:22 crc kubenswrapper[4814]: I0216 10:01:22.555671 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:22 crc kubenswrapper[4814]: I0216 10:01:22.585019 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-b965k" podStartSLOduration=7.241222107 podStartE2EDuration="15.584991999s" podCreationTimestamp="2026-02-16 10:01:07 +0000 UTC" firstStartedPulling="2026-02-16 10:01:09.126524829 +0000 UTC m=+926.819681009" lastFinishedPulling="2026-02-16 10:01:17.470294721 +0000 UTC m=+935.163450901" observedRunningTime="2026-02-16 10:01:22.583164158 +0000 UTC m=+940.276320358" watchObservedRunningTime="2026-02-16 10:01:22.584991999 +0000 UTC m=+940.278148179" Feb 16 10:01:23 crc kubenswrapper[4814]: I0216 10:01:23.905414 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:24 crc kubenswrapper[4814]: I0216 10:01:24.003806 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:28 crc kubenswrapper[4814]: I0216 10:01:28.335744 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-b86zg" Feb 16 10:01:28 crc kubenswrapper[4814]: I0216 10:01:28.454924 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-4vfps" Feb 16 10:01:29 crc kubenswrapper[4814]: I0216 10:01:29.932653 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-55g6q" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.738442 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.739917 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.742779 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-fsvkn" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.744020 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.744247 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.751230 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.823145 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78hs\" (UniqueName: \"kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs\") pod \"openstack-operator-index-42wvd\" (UID: \"229de0b4-8662-4528-af58-7df8bb60935e\") " pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.925485 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s78hs\" (UniqueName: \"kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs\") pod \"openstack-operator-index-42wvd\" (UID: \"229de0b4-8662-4528-af58-7df8bb60935e\") " pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:32 crc kubenswrapper[4814]: I0216 10:01:32.970229 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s78hs\" (UniqueName: \"kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs\") pod \"openstack-operator-index-42wvd\" (UID: 
\"229de0b4-8662-4528-af58-7df8bb60935e\") " pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:33 crc kubenswrapper[4814]: I0216 10:01:33.080426 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:33 crc kubenswrapper[4814]: I0216 10:01:33.434611 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:33 crc kubenswrapper[4814]: W0216 10:01:33.440188 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod229de0b4_8662_4528_af58_7df8bb60935e.slice/crio-7fb9b87d3facf5851a997548e475561ca5d971f5fbd3bb1800c90a4404d466e7 WatchSource:0}: Error finding container 7fb9b87d3facf5851a997548e475561ca5d971f5fbd3bb1800c90a4404d466e7: Status 404 returned error can't find the container with id 7fb9b87d3facf5851a997548e475561ca5d971f5fbd3bb1800c90a4404d466e7 Feb 16 10:01:33 crc kubenswrapper[4814]: I0216 10:01:33.680021 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-42wvd" event={"ID":"229de0b4-8662-4528-af58-7df8bb60935e","Type":"ContainerStarted","Data":"7fb9b87d3facf5851a997548e475561ca5d971f5fbd3bb1800c90a4404d466e7"} Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.089974 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.700414 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-txc7s"] Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.701761 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.716801 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-txc7s"] Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.799364 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmks\" (UniqueName: \"kubernetes.io/projected/3f66b0c7-ba80-4484-b02a-07159181c1f2-kube-api-access-4lmks\") pod \"openstack-operator-index-txc7s\" (UID: \"3f66b0c7-ba80-4484-b02a-07159181c1f2\") " pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.901577 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lmks\" (UniqueName: \"kubernetes.io/projected/3f66b0c7-ba80-4484-b02a-07159181c1f2-kube-api-access-4lmks\") pod \"openstack-operator-index-txc7s\" (UID: \"3f66b0c7-ba80-4484-b02a-07159181c1f2\") " pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:36 crc kubenswrapper[4814]: I0216 10:01:36.927652 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lmks\" (UniqueName: \"kubernetes.io/projected/3f66b0c7-ba80-4484-b02a-07159181c1f2-kube-api-access-4lmks\") pod \"openstack-operator-index-txc7s\" (UID: \"3f66b0c7-ba80-4484-b02a-07159181c1f2\") " pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.045716 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.294158 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-txc7s"] Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.715454 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-42wvd" event={"ID":"229de0b4-8662-4528-af58-7df8bb60935e","Type":"ContainerStarted","Data":"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6"} Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.715590 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-42wvd" podUID="229de0b4-8662-4528-af58-7df8bb60935e" containerName="registry-server" containerID="cri-o://7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6" gracePeriod=2 Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.717519 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-txc7s" event={"ID":"3f66b0c7-ba80-4484-b02a-07159181c1f2","Type":"ContainerStarted","Data":"16b540a3b04b8a050785484f05df849d345c55f6f8ebc1e4c87e09ceea46de0a"} Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.717563 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-txc7s" event={"ID":"3f66b0c7-ba80-4484-b02a-07159181c1f2","Type":"ContainerStarted","Data":"c935ddce73682d229bbee4bcc11dfd58df315b4d67231dc2a13005f52d023920"} Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.741583 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-42wvd" podStartSLOduration=2.59376192 podStartE2EDuration="5.741550864s" podCreationTimestamp="2026-02-16 10:01:32 +0000 UTC" firstStartedPulling="2026-02-16 10:01:33.449812506 +0000 UTC 
m=+951.142968686" lastFinishedPulling="2026-02-16 10:01:36.59760145 +0000 UTC m=+954.290757630" observedRunningTime="2026-02-16 10:01:37.739154076 +0000 UTC m=+955.432310276" watchObservedRunningTime="2026-02-16 10:01:37.741550864 +0000 UTC m=+955.434707064" Feb 16 10:01:37 crc kubenswrapper[4814]: I0216 10:01:37.761034 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-txc7s" podStartSLOduration=1.7089098539999998 podStartE2EDuration="1.761003402s" podCreationTimestamp="2026-02-16 10:01:36 +0000 UTC" firstStartedPulling="2026-02-16 10:01:37.30880349 +0000 UTC m=+955.001959670" lastFinishedPulling="2026-02-16 10:01:37.360897038 +0000 UTC m=+955.054053218" observedRunningTime="2026-02-16 10:01:37.757119182 +0000 UTC m=+955.450275362" watchObservedRunningTime="2026-02-16 10:01:37.761003402 +0000 UTC m=+955.454159582" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.088828 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.221628 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s78hs\" (UniqueName: \"kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs\") pod \"229de0b4-8662-4528-af58-7df8bb60935e\" (UID: \"229de0b4-8662-4528-af58-7df8bb60935e\") " Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.229877 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs" (OuterVolumeSpecName: "kube-api-access-s78hs") pod "229de0b4-8662-4528-af58-7df8bb60935e" (UID: "229de0b4-8662-4528-af58-7df8bb60935e"). InnerVolumeSpecName "kube-api-access-s78hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.323690 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s78hs\" (UniqueName: \"kubernetes.io/projected/229de0b4-8662-4528-af58-7df8bb60935e-kube-api-access-s78hs\") on node \"crc\" DevicePath \"\"" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.741365 4814 generic.go:334] "Generic (PLEG): container finished" podID="229de0b4-8662-4528-af58-7df8bb60935e" containerID="7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6" exitCode=0 Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.741425 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-42wvd" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.741432 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-42wvd" event={"ID":"229de0b4-8662-4528-af58-7df8bb60935e","Type":"ContainerDied","Data":"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6"} Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.741512 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-42wvd" event={"ID":"229de0b4-8662-4528-af58-7df8bb60935e","Type":"ContainerDied","Data":"7fb9b87d3facf5851a997548e475561ca5d971f5fbd3bb1800c90a4404d466e7"} Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.741554 4814 scope.go:117] "RemoveContainer" containerID="7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.767386 4814 scope.go:117] "RemoveContainer" containerID="7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6" Feb 16 10:01:38 crc kubenswrapper[4814]: E0216 10:01:38.768246 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6\": container with ID starting with 7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6 not found: ID does not exist" containerID="7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.768313 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6"} err="failed to get container status \"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6\": rpc error: code = NotFound desc = could not find container \"7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6\": container with ID starting with 7e9e9a6617d83787303a488031250fd562677123ef7144187b1a0d5542a2bff6 not found: ID does not exist" Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.775219 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.780312 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-42wvd"] Feb 16 10:01:38 crc kubenswrapper[4814]: I0216 10:01:38.910176 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b965k" Feb 16 10:01:39 crc kubenswrapper[4814]: I0216 10:01:39.005666 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229de0b4-8662-4528-af58-7df8bb60935e" path="/var/lib/kubelet/pods/229de0b4-8662-4528-af58-7df8bb60935e/volumes" Feb 16 10:01:47 crc kubenswrapper[4814]: I0216 10:01:47.046893 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:47 crc kubenswrapper[4814]: I0216 10:01:47.048246 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:47 crc kubenswrapper[4814]: I0216 10:01:47.084731 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:47 crc kubenswrapper[4814]: I0216 10:01:47.846270 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-txc7s" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.316731 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:01:53 crc kubenswrapper[4814]: E0216 10:01:53.317451 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229de0b4-8662-4528-af58-7df8bb60935e" containerName="registry-server" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.317466 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="229de0b4-8662-4528-af58-7df8bb60935e" containerName="registry-server" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.317703 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="229de0b4-8662-4528-af58-7df8bb60935e" containerName="registry-server" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.318696 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.334374 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.378577 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.378647 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.378678 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7th2\" (UniqueName: \"kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.479834 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.479900 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.479952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7th2\" (UniqueName: \"kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.480756 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.480994 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.518459 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7th2\" (UniqueName: \"kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2\") pod \"redhat-marketplace-bjjt6\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:53 crc kubenswrapper[4814]: I0216 10:01:53.647372 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.105206 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:01:54 crc kubenswrapper[4814]: W0216 10:01:54.116163 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadb3cea5_6f9c_469c_9f47_5e9de95ac516.slice/crio-8be4ce3908e3e5329885ea5277eb8fd585d7e00269066a48f07d5b670a4be372 WatchSource:0}: Error finding container 8be4ce3908e3e5329885ea5277eb8fd585d7e00269066a48f07d5b670a4be372: Status 404 returned error can't find the container with id 8be4ce3908e3e5329885ea5277eb8fd585d7e00269066a48f07d5b670a4be372 Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.750971 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7"] Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.752747 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.756032 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-m76ss" Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.771594 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7"] Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.881995 4814 generic.go:334] "Generic (PLEG): container finished" podID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerID="71fe9975bd6604e4b47ec8c89919a124f98db3035f29928c3983573f9a9cbe1c" exitCode=0 Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.882123 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerDied","Data":"71fe9975bd6604e4b47ec8c89919a124f98db3035f29928c3983573f9a9cbe1c"} Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.882470 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerStarted","Data":"8be4ce3908e3e5329885ea5277eb8fd585d7e00269066a48f07d5b670a4be372"} Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.904192 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.904297 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:54 crc kubenswrapper[4814]: I0216 10:01:54.904340 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sk4\" (UniqueName: \"kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.005471 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.005950 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.006044 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sk4\" (UniqueName: 
\"kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.006112 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.007082 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.040059 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sk4\" (UniqueName: \"kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4\") pod \"dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.070066 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.375715 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7"] Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.890507 4814 generic.go:334] "Generic (PLEG): container finished" podID="ade71140-7224-44bb-bf6d-a15f0af16718" containerID="0374b7242786261ba64b565f363499c86198d7548c9e7c2543178b70e13c7cf3" exitCode=0 Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.891280 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" event={"ID":"ade71140-7224-44bb-bf6d-a15f0af16718","Type":"ContainerDied","Data":"0374b7242786261ba64b565f363499c86198d7548c9e7c2543178b70e13c7cf3"} Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.891316 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" event={"ID":"ade71140-7224-44bb-bf6d-a15f0af16718","Type":"ContainerStarted","Data":"7535e673a512e7efff4916cf15e6f67e498827e983ef1030af376c06efe9c727"} Feb 16 10:01:55 crc kubenswrapper[4814]: I0216 10:01:55.898992 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerStarted","Data":"4290b3bd34391b6f185081a23a7bfa1d8946ce44ca79e298f9816eed4f66f2ab"} Feb 16 10:01:56 crc kubenswrapper[4814]: I0216 10:01:56.926819 4814 generic.go:334] "Generic (PLEG): container finished" podID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerID="4290b3bd34391b6f185081a23a7bfa1d8946ce44ca79e298f9816eed4f66f2ab" exitCode=0 Feb 16 10:01:56 crc kubenswrapper[4814]: I0216 10:01:56.926927 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerDied","Data":"4290b3bd34391b6f185081a23a7bfa1d8946ce44ca79e298f9816eed4f66f2ab"} Feb 16 10:01:57 crc kubenswrapper[4814]: I0216 10:01:57.937391 4814 generic.go:334] "Generic (PLEG): container finished" podID="ade71140-7224-44bb-bf6d-a15f0af16718" containerID="9457a74b35c156d0d54c937e4f25e97155af551b363ca195cdecb4c2c150f1dd" exitCode=0 Feb 16 10:01:57 crc kubenswrapper[4814]: I0216 10:01:57.937468 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" event={"ID":"ade71140-7224-44bb-bf6d-a15f0af16718","Type":"ContainerDied","Data":"9457a74b35c156d0d54c937e4f25e97155af551b363ca195cdecb4c2c150f1dd"} Feb 16 10:01:57 crc kubenswrapper[4814]: I0216 10:01:57.943449 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerStarted","Data":"689d479b38f63fab6c6b85584aa425f64bee26a5f6e9f8e2de5714cecf14fc7f"} Feb 16 10:01:57 crc kubenswrapper[4814]: I0216 10:01:57.984653 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjjt6" podStartSLOduration=2.538896318 podStartE2EDuration="4.984624711s" podCreationTimestamp="2026-02-16 10:01:53 +0000 UTC" firstStartedPulling="2026-02-16 10:01:54.885379054 +0000 UTC m=+972.578535234" lastFinishedPulling="2026-02-16 10:01:57.331107447 +0000 UTC m=+975.024263627" observedRunningTime="2026-02-16 10:01:57.979707052 +0000 UTC m=+975.672863242" watchObservedRunningTime="2026-02-16 10:01:57.984624711 +0000 UTC m=+975.677780901" Feb 16 10:01:58 crc kubenswrapper[4814]: I0216 10:01:58.956072 4814 generic.go:334] "Generic (PLEG): container finished" podID="ade71140-7224-44bb-bf6d-a15f0af16718" 
containerID="7221cab96db5a836c2d20965c4cd5f27a038147c7a03d16f3c038867505bcc2a" exitCode=0 Feb 16 10:01:58 crc kubenswrapper[4814]: I0216 10:01:58.956189 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" event={"ID":"ade71140-7224-44bb-bf6d-a15f0af16718","Type":"ContainerDied","Data":"7221cab96db5a836c2d20965c4cd5f27a038147c7a03d16f3c038867505bcc2a"} Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.273152 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.407497 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util\") pod \"ade71140-7224-44bb-bf6d-a15f0af16718\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.407608 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4sk4\" (UniqueName: \"kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4\") pod \"ade71140-7224-44bb-bf6d-a15f0af16718\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.407638 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle\") pod \"ade71140-7224-44bb-bf6d-a15f0af16718\" (UID: \"ade71140-7224-44bb-bf6d-a15f0af16718\") " Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.409430 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle" (OuterVolumeSpecName: "bundle") pod 
"ade71140-7224-44bb-bf6d-a15f0af16718" (UID: "ade71140-7224-44bb-bf6d-a15f0af16718"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.415920 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4" (OuterVolumeSpecName: "kube-api-access-t4sk4") pod "ade71140-7224-44bb-bf6d-a15f0af16718" (UID: "ade71140-7224-44bb-bf6d-a15f0af16718"). InnerVolumeSpecName "kube-api-access-t4sk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.428622 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util" (OuterVolumeSpecName: "util") pod "ade71140-7224-44bb-bf6d-a15f0af16718" (UID: "ade71140-7224-44bb-bf6d-a15f0af16718"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.509687 4814 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-util\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.509755 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4sk4\" (UniqueName: \"kubernetes.io/projected/ade71140-7224-44bb-bf6d-a15f0af16718-kube-api-access-t4sk4\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.509779 4814 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ade71140-7224-44bb-bf6d-a15f0af16718-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.976626 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" event={"ID":"ade71140-7224-44bb-bf6d-a15f0af16718","Type":"ContainerDied","Data":"7535e673a512e7efff4916cf15e6f67e498827e983ef1030af376c06efe9c727"} Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.976713 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7535e673a512e7efff4916cf15e6f67e498827e983ef1030af376c06efe9c727" Feb 16 10:02:00 crc kubenswrapper[4814]: I0216 10:02:00.976670 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7" Feb 16 10:02:03 crc kubenswrapper[4814]: I0216 10:02:03.648411 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:03 crc kubenswrapper[4814]: I0216 10:02:03.648779 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:03 crc kubenswrapper[4814]: I0216 10:02:03.696270 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.028220 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c79ff849-568kl"] Feb 16 10:02:04 crc kubenswrapper[4814]: E0216 10:02:04.028659 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="util" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.028685 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="util" Feb 16 10:02:04 crc kubenswrapper[4814]: E0216 10:02:04.028711 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="extract" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.028720 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="extract" Feb 16 10:02:04 crc kubenswrapper[4814]: E0216 10:02:04.028740 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="pull" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.028749 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="pull" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.033337 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade71140-7224-44bb-bf6d-a15f0af16718" containerName="extract" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.034177 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.037042 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-7qf2g" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.051198 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c79ff849-568kl"] Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.059684 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.168501 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvn2\" (UniqueName: \"kubernetes.io/projected/64658bf5-6ea3-4442-a3f1-fe3b1e2fdace-kube-api-access-wbvn2\") pod \"openstack-operator-controller-init-68c79ff849-568kl\" (UID: 
\"64658bf5-6ea3-4442-a3f1-fe3b1e2fdace\") " pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.270316 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbvn2\" (UniqueName: \"kubernetes.io/projected/64658bf5-6ea3-4442-a3f1-fe3b1e2fdace-kube-api-access-wbvn2\") pod \"openstack-operator-controller-init-68c79ff849-568kl\" (UID: \"64658bf5-6ea3-4442-a3f1-fe3b1e2fdace\") " pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.291800 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbvn2\" (UniqueName: \"kubernetes.io/projected/64658bf5-6ea3-4442-a3f1-fe3b1e2fdace-kube-api-access-wbvn2\") pod \"openstack-operator-controller-init-68c79ff849-568kl\" (UID: \"64658bf5-6ea3-4442-a3f1-fe3b1e2fdace\") " pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.355937 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:04 crc kubenswrapper[4814]: I0216 10:02:04.646014 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-68c79ff849-568kl"] Feb 16 10:02:05 crc kubenswrapper[4814]: I0216 10:02:05.027523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" event={"ID":"64658bf5-6ea3-4442-a3f1-fe3b1e2fdace","Type":"ContainerStarted","Data":"45224962d1024c40a87662b870daed54fa6545f30a96564296b38fdb88b9bef4"} Feb 16 10:02:06 crc kubenswrapper[4814]: I0216 10:02:06.091419 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:02:06 crc kubenswrapper[4814]: I0216 10:02:06.092212 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjjt6" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="registry-server" containerID="cri-o://689d479b38f63fab6c6b85584aa425f64bee26a5f6e9f8e2de5714cecf14fc7f" gracePeriod=2 Feb 16 10:02:07 crc kubenswrapper[4814]: I0216 10:02:07.050379 4814 generic.go:334] "Generic (PLEG): container finished" podID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerID="689d479b38f63fab6c6b85584aa425f64bee26a5f6e9f8e2de5714cecf14fc7f" exitCode=0 Feb 16 10:02:07 crc kubenswrapper[4814]: I0216 10:02:07.050456 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerDied","Data":"689d479b38f63fab6c6b85584aa425f64bee26a5f6e9f8e2de5714cecf14fc7f"} Feb 16 10:02:07 crc kubenswrapper[4814]: I0216 10:02:07.960453 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:02:07 crc kubenswrapper[4814]: I0216 10:02:07.960567 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.167357 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.265581 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7th2\" (UniqueName: \"kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2\") pod \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.265751 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities\") pod \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.265952 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content\") pod \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\" (UID: \"adb3cea5-6f9c-469c-9f47-5e9de95ac516\") " Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.266884 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities" (OuterVolumeSpecName: "utilities") pod "adb3cea5-6f9c-469c-9f47-5e9de95ac516" (UID: "adb3cea5-6f9c-469c-9f47-5e9de95ac516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.277377 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2" (OuterVolumeSpecName: "kube-api-access-w7th2") pod "adb3cea5-6f9c-469c-9f47-5e9de95ac516" (UID: "adb3cea5-6f9c-469c-9f47-5e9de95ac516"). InnerVolumeSpecName "kube-api-access-w7th2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.299434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adb3cea5-6f9c-469c-9f47-5e9de95ac516" (UID: "adb3cea5-6f9c-469c-9f47-5e9de95ac516"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.367625 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.367667 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7th2\" (UniqueName: \"kubernetes.io/projected/adb3cea5-6f9c-469c-9f47-5e9de95ac516-kube-api-access-w7th2\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:09 crc kubenswrapper[4814]: I0216 10:02:09.367692 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adb3cea5-6f9c-469c-9f47-5e9de95ac516-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.087070 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjt6" event={"ID":"adb3cea5-6f9c-469c-9f47-5e9de95ac516","Type":"ContainerDied","Data":"8be4ce3908e3e5329885ea5277eb8fd585d7e00269066a48f07d5b670a4be372"} Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.087589 4814 scope.go:117] "RemoveContainer" containerID="689d479b38f63fab6c6b85584aa425f64bee26a5f6e9f8e2de5714cecf14fc7f" Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.087420 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjt6" Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.128397 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.135864 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjt6"] Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.516026 4814 scope.go:117] "RemoveContainer" containerID="4290b3bd34391b6f185081a23a7bfa1d8946ce44ca79e298f9816eed4f66f2ab" Feb 16 10:02:10 crc kubenswrapper[4814]: I0216 10:02:10.542754 4814 scope.go:117] "RemoveContainer" containerID="71fe9975bd6604e4b47ec8c89919a124f98db3035f29928c3983573f9a9cbe1c" Feb 16 10:02:11 crc kubenswrapper[4814]: I0216 10:02:11.002153 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" path="/var/lib/kubelet/pods/adb3cea5-6f9c-469c-9f47-5e9de95ac516/volumes" Feb 16 10:02:11 crc kubenswrapper[4814]: I0216 10:02:11.096126 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" event={"ID":"64658bf5-6ea3-4442-a3f1-fe3b1e2fdace","Type":"ContainerStarted","Data":"1f3682f85ea221af3bfa8b3addb6ae9ecf7c852a72e3d44de961a5e187d0df10"} Feb 16 10:02:11 crc kubenswrapper[4814]: I0216 10:02:11.096336 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:11 crc kubenswrapper[4814]: I0216 10:02:11.128087 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" podStartSLOduration=2.240784566 podStartE2EDuration="8.128059912s" podCreationTimestamp="2026-02-16 10:02:03 +0000 UTC" firstStartedPulling="2026-02-16 10:02:04.660898527 +0000 UTC 
m=+982.354054707" lastFinishedPulling="2026-02-16 10:02:10.548173863 +0000 UTC m=+988.241330053" observedRunningTime="2026-02-16 10:02:11.126289352 +0000 UTC m=+988.819445532" watchObservedRunningTime="2026-02-16 10:02:11.128059912 +0000 UTC m=+988.821216092" Feb 16 10:02:24 crc kubenswrapper[4814]: I0216 10:02:24.358630 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-68c79ff849-568kl" Feb 16 10:02:37 crc kubenswrapper[4814]: I0216 10:02:37.960681 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:02:37 crc kubenswrapper[4814]: I0216 10:02:37.961585 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.613394 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x"] Feb 16 10:02:44 crc kubenswrapper[4814]: E0216 10:02:44.614709 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="extract-utilities" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.614729 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="extract-utilities" Feb 16 10:02:44 crc kubenswrapper[4814]: E0216 10:02:44.614756 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" 
containerName="registry-server" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.614764 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="registry-server" Feb 16 10:02:44 crc kubenswrapper[4814]: E0216 10:02:44.614781 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="extract-content" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.614789 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="extract-content" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.614916 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb3cea5-6f9c-469c-9f47-5e9de95ac516" containerName="registry-server" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.615641 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.617878 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lbm4r" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.631001 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.632050 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.640371 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rtstz" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.643107 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.661517 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kmskc"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.662841 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.668294 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wbcg5" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.681154 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.682606 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.686018 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-2xq45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.697188 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.705061 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kmskc"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.732017 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.745018 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.746406 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.752340 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7bvrp" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.761639 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.763009 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.771083 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-q65hb" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.789062 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.793526 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8th\" (UniqueName: \"kubernetes.io/projected/5dce01de-2987-428e-8e82-916685ec38d0-kube-api-access-7n8th\") pod \"glance-operator-controller-manager-77987464f4-kmskc\" (UID: \"5dce01de-2987-428e-8e82-916685ec38d0\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.793614 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnxp\" (UniqueName: \"kubernetes.io/projected/2ffba7b1-f1c7-4422-bbd2-240022e594a9-kube-api-access-vhnxp\") pod \"designate-operator-controller-manager-6d8bf5c495-9ltsr\" (UID: \"2ffba7b1-f1c7-4422-bbd2-240022e594a9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.793694 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwt79\" (UniqueName: \"kubernetes.io/projected/2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74-kube-api-access-mwt79\") pod \"cinder-operator-controller-manager-5d946d989d-shv45\" (UID: \"2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 
10:02:44.793720 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf27f\" (UniqueName: \"kubernetes.io/projected/96b8a99b-83ce-4d62-b471-a8bcc47aa67a-kube-api-access-hf27f\") pod \"barbican-operator-controller-manager-868647ff47-ndn8x\" (UID: \"96b8a99b-83ce-4d62-b471-a8bcc47aa67a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.795469 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.796574 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.807256 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.808355 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.809062 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.809409 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zkw7w" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.828654 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.828750 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.834612 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.836639 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-54z96" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.854658 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.856032 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.861956 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.880091 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vfjzk" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897698 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fb6d\" (UniqueName: \"kubernetes.io/projected/cd61e4fa-ce01-4597-9f4c-e90419b3c582-kube-api-access-4fb6d\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n8th\" (UniqueName: \"kubernetes.io/projected/5dce01de-2987-428e-8e82-916685ec38d0-kube-api-access-7n8th\") pod \"glance-operator-controller-manager-77987464f4-kmskc\" (UID: \"5dce01de-2987-428e-8e82-916685ec38d0\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897839 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnxp\" (UniqueName: \"kubernetes.io/projected/2ffba7b1-f1c7-4422-bbd2-240022e594a9-kube-api-access-vhnxp\") pod \"designate-operator-controller-manager-6d8bf5c495-9ltsr\" (UID: \"2ffba7b1-f1c7-4422-bbd2-240022e594a9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897916 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwt79\" (UniqueName: \"kubernetes.io/projected/2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74-kube-api-access-mwt79\") pod \"cinder-operator-controller-manager-5d946d989d-shv45\" (UID: \"2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897949 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq9fk\" (UniqueName: \"kubernetes.io/projected/d6383f25-e9d4-4606-aa4a-fd1ed2b9299c-kube-api-access-dq9fk\") pod \"ironic-operator-controller-manager-554564d7fc-mscb9\" (UID: \"d6383f25-e9d4-4606-aa4a-fd1ed2b9299c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.897975 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf27f\" (UniqueName: \"kubernetes.io/projected/96b8a99b-83ce-4d62-b471-a8bcc47aa67a-kube-api-access-hf27f\") pod \"barbican-operator-controller-manager-868647ff47-ndn8x\" (UID: \"96b8a99b-83ce-4d62-b471-a8bcc47aa67a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.898008 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.898043 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlc27\" (UniqueName: 
\"kubernetes.io/projected/e763fa22-f350-4b3c-930e-f115981b2cd5-kube-api-access-jlc27\") pod \"heat-operator-controller-manager-69f49c598c-mrqpp\" (UID: \"e763fa22-f350-4b3c-930e-f115981b2cd5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.898072 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzvp\" (UniqueName: \"kubernetes.io/projected/2d17d4ba-3b70-4b99-808c-a9fb764754a4-kube-api-access-vhzvp\") pod \"horizon-operator-controller-manager-5b9b8895d5-dl9md\" (UID: \"2d17d4ba-3b70-4b99-808c-a9fb764754a4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.903610 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.905043 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.931139 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9bkqp" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.935521 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.936896 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.939631 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.947844 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gxnhr" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.949606 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwt79\" (UniqueName: \"kubernetes.io/projected/2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74-kube-api-access-mwt79\") pod \"cinder-operator-controller-manager-5d946d989d-shv45\" (UID: \"2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.958223 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.959461 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"] Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.960258 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf27f\" (UniqueName: \"kubernetes.io/projected/96b8a99b-83ce-4d62-b471-a8bcc47aa67a-kube-api-access-hf27f\") pod \"barbican-operator-controller-manager-868647ff47-ndn8x\" (UID: \"96b8a99b-83ce-4d62-b471-a8bcc47aa67a\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:02:44 crc kubenswrapper[4814]: I0216 10:02:44.968195 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n8th\" (UniqueName: \"kubernetes.io/projected/5dce01de-2987-428e-8e82-916685ec38d0-kube-api-access-7n8th\") pod \"glance-operator-controller-manager-77987464f4-kmskc\" (UID: \"5dce01de-2987-428e-8e82-916685ec38d0\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.008695 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnxp\" (UniqueName: \"kubernetes.io/projected/2ffba7b1-f1c7-4422-bbd2-240022e594a9-kube-api-access-vhnxp\") pod \"designate-operator-controller-manager-6d8bf5c495-9ltsr\" (UID: \"2ffba7b1-f1c7-4422-bbd2-240022e594a9\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.010710 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.130781 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fb6d\" (UniqueName: \"kubernetes.io/projected/cd61e4fa-ce01-4597-9f4c-e90419b3c582-kube-api-access-4fb6d\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.168358 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcn4r\" (UniqueName: \"kubernetes.io/projected/7282bc18-ffbd-4680-abb9-40dbe56ad895-kube-api-access-zcn4r\") pod \"keystone-operator-controller-manager-b4d948c87-f6jgb\" (UID: \"7282bc18-ffbd-4680-abb9-40dbe56ad895\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.168684 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k22p\" (UniqueName: \"kubernetes.io/projected/e720ed93-e990-4508-ad82-cd7c7d097e9c-kube-api-access-8k22p\") pod \"mariadb-operator-controller-manager-6994f66f48-wv8lv\" (UID: \"e720ed93-e990-4508-ad82-cd7c7d097e9c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.168809 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq9fk\" (UniqueName: \"kubernetes.io/projected/d6383f25-e9d4-4606-aa4a-fd1ed2b9299c-kube-api-access-dq9fk\") pod \"ironic-operator-controller-manager-554564d7fc-mscb9\" (UID: \"d6383f25-e9d4-4606-aa4a-fd1ed2b9299c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" Feb 16 10:02:45 crc 
kubenswrapper[4814]: I0216 10:02:45.168899 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.168945 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlc27\" (UniqueName: \"kubernetes.io/projected/e763fa22-f350-4b3c-930e-f115981b2cd5-kube-api-access-jlc27\") pod \"heat-operator-controller-manager-69f49c598c-mrqpp\" (UID: \"e763fa22-f350-4b3c-930e-f115981b2cd5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.168970 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhzvp\" (UniqueName: \"kubernetes.io/projected/2d17d4ba-3b70-4b99-808c-a9fb764754a4-kube-api-access-vhzvp\") pod \"horizon-operator-controller-manager-5b9b8895d5-dl9md\" (UID: \"2d17d4ba-3b70-4b99-808c-a9fb764754a4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.184128 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.184268 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert podName:cd61e4fa-ce01-4597-9f4c-e90419b3c582 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:45.684225489 +0000 UTC m=+1023.377381669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert") pod "infra-operator-controller-manager-79d975b745-5fwts" (UID: "cd61e4fa-ce01-4597-9f4c-e90419b3c582") : secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.192826 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"] Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.194182 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.214034 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mrd77" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.228171 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fb6d\" (UniqueName: \"kubernetes.io/projected/cd61e4fa-ce01-4597-9f4c-e90419b3c582-kube-api-access-4fb6d\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.229439 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhzvp\" (UniqueName: \"kubernetes.io/projected/2d17d4ba-3b70-4b99-808c-a9fb764754a4-kube-api-access-vhzvp\") pod \"horizon-operator-controller-manager-5b9b8895d5-dl9md\" (UID: \"2d17d4ba-3b70-4b99-808c-a9fb764754a4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.246439 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.252604 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"] Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.253813 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.272964 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"] Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.273050 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"] Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.277038 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.277767 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpsb5\" (UniqueName: \"kubernetes.io/projected/0808e383-92fc-4af4-82c1-7324a6729e7a-kube-api-access-jpsb5\") pod \"manila-operator-controller-manager-54f6768c69-h5w4b\" (UID: \"0808e383-92fc-4af4-82c1-7324a6729e7a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.277861 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k22p\" (UniqueName: \"kubernetes.io/projected/e720ed93-e990-4508-ad82-cd7c7d097e9c-kube-api-access-8k22p\") pod \"mariadb-operator-controller-manager-6994f66f48-wv8lv\" (UID: \"e720ed93-e990-4508-ad82-cd7c7d097e9c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.277994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcn4r\" (UniqueName: \"kubernetes.io/projected/7282bc18-ffbd-4680-abb9-40dbe56ad895-kube-api-access-zcn4r\") pod \"keystone-operator-controller-manager-b4d948c87-f6jgb\" (UID: \"7282bc18-ffbd-4680-abb9-40dbe56ad895\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.283598 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq9fk\" (UniqueName: \"kubernetes.io/projected/d6383f25-e9d4-4606-aa4a-fd1ed2b9299c-kube-api-access-dq9fk\") pod \"ironic-operator-controller-manager-554564d7fc-mscb9\" (UID: \"d6383f25-e9d4-4606-aa4a-fd1ed2b9299c\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" Feb 16 10:02:45 crc kubenswrapper[4814]: 
I0216 10:02:45.288779 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n5lxw"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.291824 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qw9hd"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.292774 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlc27\" (UniqueName: \"kubernetes.io/projected/e763fa22-f350-4b3c-930e-f115981b2cd5-kube-api-access-jlc27\") pod \"heat-operator-controller-manager-69f49c598c-mrqpp\" (UID: \"e763fa22-f350-4b3c-930e-f115981b2cd5\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.358722 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcn4r\" (UniqueName: \"kubernetes.io/projected/7282bc18-ffbd-4680-abb9-40dbe56ad895-kube-api-access-zcn4r\") pod \"keystone-operator-controller-manager-b4d948c87-f6jgb\" (UID: \"7282bc18-ffbd-4680-abb9-40dbe56ad895\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.363308 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.376489 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.376942 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.379423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k22p\" (UniqueName: \"kubernetes.io/projected/e720ed93-e990-4508-ad82-cd7c7d097e9c-kube-api-access-8k22p\") pod \"mariadb-operator-controller-manager-6994f66f48-wv8lv\" (UID: \"e720ed93-e990-4508-ad82-cd7c7d097e9c\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.380451 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8d2\" (UniqueName: \"kubernetes.io/projected/aaa14470-c664-49a4-88f4-d48c9c2f7eda-kube-api-access-vd8d2\") pod \"neutron-operator-controller-manager-64ddbf8bb-qstdq\" (UID: \"aaa14470-c664-49a4-88f4-d48c9c2f7eda\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.380523 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg42b\" (UniqueName: \"kubernetes.io/projected/27612122-6b3e-468c-9050-ff180e9212d8-kube-api-access-hg42b\") pod \"nova-operator-controller-manager-567668f5cf-qbxxf\" (UID: \"27612122-6b3e-468c-9050-ff180e9212d8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.380573 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpsb5\" (UniqueName: \"kubernetes.io/projected/0808e383-92fc-4af4-82c1-7324a6729e7a-kube-api-access-jpsb5\") pod \"manila-operator-controller-manager-54f6768c69-h5w4b\" (UID: \"0808e383-92fc-4af4-82c1-7324a6729e7a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.380607 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb648\" (UniqueName: \"kubernetes.io/projected/57a9e823-2475-4a15-9ac0-1cd8b4f0197c-kube-api-access-pb648\") pod \"octavia-operator-controller-manager-69f8888797-f9l2v\" (UID: \"57a9e823-2475-4a15-9ac0-1cd8b4f0197c\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.394308 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.419898 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpsb5\" (UniqueName: \"kubernetes.io/projected/0808e383-92fc-4af4-82c1-7324a6729e7a-kube-api-access-jpsb5\") pod \"manila-operator-controller-manager-54f6768c69-h5w4b\" (UID: \"0808e383-92fc-4af4-82c1-7324a6729e7a\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.420152 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.453157 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.479856 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.482686 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg42b\" (UniqueName: \"kubernetes.io/projected/27612122-6b3e-468c-9050-ff180e9212d8-kube-api-access-hg42b\") pod \"nova-operator-controller-manager-567668f5cf-qbxxf\" (UID: \"27612122-6b3e-468c-9050-ff180e9212d8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.498932 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb648\" (UniqueName: \"kubernetes.io/projected/57a9e823-2475-4a15-9ac0-1cd8b4f0197c-kube-api-access-pb648\") pod \"octavia-operator-controller-manager-69f8888797-f9l2v\" (UID: \"57a9e823-2475-4a15-9ac0-1cd8b4f0197c\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.499178 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8d2\" (UniqueName: \"kubernetes.io/projected/aaa14470-c664-49a4-88f4-d48c9c2f7eda-kube-api-access-vd8d2\") pod \"neutron-operator-controller-manager-64ddbf8bb-qstdq\" (UID: \"aaa14470-c664-49a4-88f4-d48c9c2f7eda\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.499870 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.501098 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.519399 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.522307 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.531272 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.532446 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.533315 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.534940 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fxzd4"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.554687 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.556764 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.569512 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.583702 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.601262 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkj7\" (UniqueName: \"kubernetes.io/projected/fea081c6-407f-4dd4-958f-0d567d0df233-kube-api-access-pxkj7\") pod \"ovn-operator-controller-manager-d44cf6b75-sl5wn\" (UID: \"fea081c6-407f-4dd4-958f-0d567d0df233\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.601868 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.609179 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.609324 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.610714 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.617620 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7dl25"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.618854 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.619260 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.620318 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.632163 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mk7vg"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.632346 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-t8bm9"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.632526 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-74pvw"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.632736 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6s97x"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.641320 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb648\" (UniqueName: \"kubernetes.io/projected/57a9e823-2475-4a15-9ac0-1cd8b4f0197c-kube-api-access-pb648\") pod \"octavia-operator-controller-manager-69f8888797-f9l2v\" (UID: \"57a9e823-2475-4a15-9ac0-1cd8b4f0197c\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.642517 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kfk7h"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.645376 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg42b\" (UniqueName: \"kubernetes.io/projected/27612122-6b3e-468c-9050-ff180e9212d8-kube-api-access-hg42b\") pod \"nova-operator-controller-manager-567668f5cf-qbxxf\" (UID: \"27612122-6b3e-468c-9050-ff180e9212d8\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.693623 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8d2\" (UniqueName: \"kubernetes.io/projected/aaa14470-c664-49a4-88f4-d48c9c2f7eda-kube-api-access-vd8d2\") pod \"neutron-operator-controller-manager-64ddbf8bb-qstdq\" (UID: \"aaa14470-c664-49a4-88f4-d48c9c2f7eda\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704164 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704214 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkzv7\" (UniqueName: \"kubernetes.io/projected/e9d0d20b-f520-4a52-93d5-02fa13273625-kube-api-access-lkzv7\") pod \"placement-operator-controller-manager-8497b45c89-rtsgp\" (UID: \"e9d0d20b-f520-4a52-93d5-02fa13273625\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704270 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704320 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxkj7\" (UniqueName: \"kubernetes.io/projected/fea081c6-407f-4dd4-958f-0d567d0df233-kube-api-access-pxkj7\") pod \"ovn-operator-controller-manager-d44cf6b75-sl5wn\" (UID: \"fea081c6-407f-4dd4-958f-0d567d0df233\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704349 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdssq\" (UniqueName: \"kubernetes.io/projected/0e3cc780-e5be-4808-b9c3-d514994ce8cb-kube-api-access-kdssq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wh9lm\" (UID: \"0e3cc780-e5be-4808-b9c3-d514994ce8cb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.704379 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p492m\" (UniqueName: \"kubernetes.io/projected/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-kube-api-access-p492m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.704577 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.704644 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert podName:cd61e4fa-ce01-4597-9f4c-e90419b3c582 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:46.704623521 +0000 UTC m=+1024.397779691 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert") pod "infra-operator-controller-manager-79d975b745-5fwts" (UID: "cd61e4fa-ce01-4597-9f4c-e90419b3c582") : secret "infra-operator-webhook-server-cert" not found
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.711095 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7dl25"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.732902 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.734418 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.742788 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.744004 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.752599 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qp2hg"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.776054 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.805950 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qxbw\" (UniqueName: \"kubernetes.io/projected/12f8611d-0069-4ea0-a926-3f7c34ac5424-kube-api-access-9qxbw\") pod \"swift-operator-controller-manager-68f46476f-5pd8h\" (UID: \"12f8611d-0069-4ea0-a926-3f7c34ac5424\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.806088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdssq\" (UniqueName: \"kubernetes.io/projected/0e3cc780-e5be-4808-b9c3-d514994ce8cb-kube-api-access-kdssq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wh9lm\" (UID: \"0e3cc780-e5be-4808-b9c3-d514994ce8cb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.806137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p492m\" (UniqueName: \"kubernetes.io/projected/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-kube-api-access-p492m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.806174 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kg6r\" (UniqueName: \"kubernetes.io/projected/3a2d26bf-3be8-48a8-845d-ea10f5196876-kube-api-access-5kg6r\") pod \"test-operator-controller-manager-7866795846-7dl25\" (UID: \"3a2d26bf-3be8-48a8-845d-ea10f5196876\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.806206 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.806226 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkzv7\" (UniqueName: \"kubernetes.io/projected/e9d0d20b-f520-4a52-93d5-02fa13273625-kube-api-access-lkzv7\") pod \"placement-operator-controller-manager-8497b45c89-rtsgp\" (UID: \"e9d0d20b-f520-4a52-93d5-02fa13273625\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.807058 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 10:02:45 crc kubenswrapper[4814]: E0216 10:02:45.807118 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:02:46.307096809 +0000 UTC m=+1024.000252989 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.812312 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxkj7\" (UniqueName: \"kubernetes.io/projected/fea081c6-407f-4dd4-958f-0d567d0df233-kube-api-access-pxkj7\") pod \"ovn-operator-controller-manager-d44cf6b75-sl5wn\" (UID: \"fea081c6-407f-4dd4-958f-0d567d0df233\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.847117 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdssq\" (UniqueName: \"kubernetes.io/projected/0e3cc780-e5be-4808-b9c3-d514994ce8cb-kube-api-access-kdssq\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wh9lm\" (UID: \"0e3cc780-e5be-4808-b9c3-d514994ce8cb\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.872861 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.893552 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p492m\" (UniqueName: \"kubernetes.io/projected/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-kube-api-access-p492m\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.904547 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkzv7\" (UniqueName: \"kubernetes.io/projected/e9d0d20b-f520-4a52-93d5-02fa13273625-kube-api-access-lkzv7\") pod \"placement-operator-controller-manager-8497b45c89-rtsgp\" (UID: \"e9d0d20b-f520-4a52-93d5-02fa13273625\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.948722 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.953583 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qxbw\" (UniqueName: \"kubernetes.io/projected/12f8611d-0069-4ea0-a926-3f7c34ac5424-kube-api-access-9qxbw\") pod \"swift-operator-controller-manager-68f46476f-5pd8h\" (UID: \"12f8611d-0069-4ea0-a926-3f7c34ac5424\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.953701 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp295\" (UniqueName: \"kubernetes.io/projected/c436a9b9-dacb-4c82-b799-117453b8c695-kube-api-access-vp295\") pod \"watcher-operator-controller-manager-7787dfc59c-cx6k2\" (UID: \"c436a9b9-dacb-4c82-b799-117453b8c695\") " pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.962689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kg6r\" (UniqueName: \"kubernetes.io/projected/3a2d26bf-3be8-48a8-845d-ea10f5196876-kube-api-access-5kg6r\") pod \"test-operator-controller-manager-7866795846-7dl25\" (UID: \"3a2d26bf-3be8-48a8-845d-ea10f5196876\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.966637 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"]
Feb 16 10:02:45 crc kubenswrapper[4814]: I0216 10:02:45.976496 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.018491 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.071139 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp295\" (UniqueName: \"kubernetes.io/projected/c436a9b9-dacb-4c82-b799-117453b8c695-kube-api-access-vp295\") pod \"watcher-operator-controller-manager-7787dfc59c-cx6k2\" (UID: \"c436a9b9-dacb-4c82-b799-117453b8c695\") " pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.095993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kg6r\" (UniqueName: \"kubernetes.io/projected/3a2d26bf-3be8-48a8-845d-ea10f5196876-kube-api-access-5kg6r\") pod \"test-operator-controller-manager-7866795846-7dl25\" (UID: \"3a2d26bf-3be8-48a8-845d-ea10f5196876\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.107323 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qxbw\" (UniqueName: \"kubernetes.io/projected/12f8611d-0069-4ea0-a926-3f7c34ac5424-kube-api-access-9qxbw\") pod \"swift-operator-controller-manager-68f46476f-5pd8h\" (UID: \"12f8611d-0069-4ea0-a926-3f7c34ac5424\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.164987 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp295\" (UniqueName: \"kubernetes.io/projected/c436a9b9-dacb-4c82-b799-117453b8c695-kube-api-access-vp295\") pod \"watcher-operator-controller-manager-7787dfc59c-cx6k2\" (UID: \"c436a9b9-dacb-4c82-b799-117453b8c695\") " pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.335178 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"]
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.379684 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.381360 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.381445 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:02:47.381420592 +0000 UTC m=+1025.074576772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.382196 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.387959 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"]
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.389988 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.393922 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hx8s5"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.394123 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.394221 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.416611 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.430607 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.468946 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"]
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.470710 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.477150 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r98s7"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.481453 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.481526 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.481666 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6zk\" (UniqueName: \"kubernetes.io/projected/c2b42d7c-69c1-4052-910f-a174001cc739-kube-api-access-ss6zk\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.489532 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"]
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.583034 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6zk\" (UniqueName: \"kubernetes.io/projected/c2b42d7c-69c1-4052-910f-a174001cc739-kube-api-access-ss6zk\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.583130 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.583163 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.583207 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98z7\" (UniqueName: \"kubernetes.io/projected/1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a-kube-api-access-v98z7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6lhcl\" (UID: \"1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.583750 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.583802 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:47.083781914 +0000 UTC m=+1024.776938094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "metrics-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.583941 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.584070 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:47.084034381 +0000 UTC m=+1024.777190561 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "webhook-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.628201 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss6zk\" (UniqueName: \"kubernetes.io/projected/c2b42d7c-69c1-4052-910f-a174001cc739-kube-api-access-ss6zk\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.685236 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98z7\" (UniqueName: \"kubernetes.io/projected/1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a-kube-api-access-v98z7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6lhcl\" (UID: \"1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"
Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.786805 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.787026 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 10:02:46 crc kubenswrapper[4814]: E0216 10:02:46.787144 4814 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert podName:cd61e4fa-ce01-4597-9f4c-e90419b3c582 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:48.787112533 +0000 UTC m=+1026.480268713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert") pod "infra-operator-controller-manager-79d975b745-5fwts" (UID: "cd61e4fa-ce01-4597-9f4c-e90419b3c582") : secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.851825 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v98z7\" (UniqueName: \"kubernetes.io/projected/1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a-kube-api-access-v98z7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6lhcl\" (UID: \"1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" Feb 16 10:02:46 crc kubenswrapper[4814]: I0216 10:02:46.855345 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" Feb 16 10:02:47 crc kubenswrapper[4814]: I0216 10:02:47.104246 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:47 crc kubenswrapper[4814]: I0216 10:02:47.104713 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.104908 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.104980 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:48.104958069 +0000 UTC m=+1025.798114249 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "webhook-server-cert" not found Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.106879 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.106926 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:48.106916184 +0000 UTC m=+1025.800072364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "metrics-server-cert" not found Feb 16 10:02:47 crc kubenswrapper[4814]: I0216 10:02:47.419635 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.419935 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:47 crc kubenswrapper[4814]: E0216 10:02:47.420003 4814 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:02:49.419984066 +0000 UTC m=+1027.113140246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.142472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.142554 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.142791 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.142863 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. 
No retries permitted until 2026-02-16 10:02:50.142842573 +0000 UTC m=+1027.835998753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "webhook-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.142854 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.142966 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:50.142936036 +0000 UTC m=+1027.836092216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "metrics-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.260223 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.286450 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-kmskc"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.385745 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"] Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.386977 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode720ed93_e990_4508_ad82_cd7c7d097e9c.slice/crio-40e1883a4422b9be5d5f35ac0a386d84217d9ac6b72537a497eacc876537e191 WatchSource:0}: Error finding container 40e1883a4422b9be5d5f35ac0a386d84217d9ac6b72537a497eacc876537e191: Status 404 returned error can't find the container with id 40e1883a4422b9be5d5f35ac0a386d84217d9ac6b72537a497eacc876537e191 Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.403429 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.420142 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"] Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.427003 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode763fa22_f350_4b3c_930e_f115981b2cd5.slice/crio-6ea66452f2ad75874949203c3b115037af8bfbc606d8be90f40e05d55d0049d4 WatchSource:0}: Error finding container 6ea66452f2ad75874949203c3b115037af8bfbc606d8be90f40e05d55d0049d4: Status 404 returned error can't find the container with id 6ea66452f2ad75874949203c3b115037af8bfbc606d8be90f40e05d55d0049d4 Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.434429 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.564341 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" event={"ID":"e763fa22-f350-4b3c-930e-f115981b2cd5","Type":"ContainerStarted","Data":"6ea66452f2ad75874949203c3b115037af8bfbc606d8be90f40e05d55d0049d4"} Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.568655 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" event={"ID":"2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74","Type":"ContainerStarted","Data":"1a49c8f29824a9f52d055e88be5247682e1b81cbda78c75dac98da473fa2bb4b"} Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.570954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" event={"ID":"5dce01de-2987-428e-8e82-916685ec38d0","Type":"ContainerStarted","Data":"154a3af6c1afa632f581ec354ad2563e96d79f752e3e609e7810c1c4882b167d"} Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.574462 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" event={"ID":"0808e383-92fc-4af4-82c1-7324a6729e7a","Type":"ContainerStarted","Data":"2f93e34832a28f5a6822b9dec8c4c43b932e352773229ae0ad8b6051e996e2a6"} Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.581051 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" event={"ID":"96b8a99b-83ce-4d62-b471-a8bcc47aa67a","Type":"ContainerStarted","Data":"a277d11abf26a701e89560c4b42596b88656ecdc7d1464fbfa15710586a1f865"} Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.584547 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" event={"ID":"e720ed93-e990-4508-ad82-cd7c7d097e9c","Type":"ContainerStarted","Data":"40e1883a4422b9be5d5f35ac0a386d84217d9ac6b72537a497eacc876537e191"} Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.815472 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6383f25_e9d4_4606_aa4a_fd1ed2b9299c.slice/crio-e2994770c2aafeaa2626cec8803ff3ba76c89eb6aa8170a138ef1f3317578ae9 WatchSource:0}: Error finding container 
e2994770c2aafeaa2626cec8803ff3ba76c89eb6aa8170a138ef1f3317578ae9: Status 404 returned error can't find the container with id e2994770c2aafeaa2626cec8803ff3ba76c89eb6aa8170a138ef1f3317578ae9 Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.818992 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.849656 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"] Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.861481 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3cc780_e5be_4808_b9c3_d514994ce8cb.slice/crio-12155afd58ade3e9433a7fdd64f2bb00ea2fb98597eb219dbf7a17ccfa1132fe WatchSource:0}: Error finding container 12155afd58ade3e9433a7fdd64f2bb00ea2fb98597eb219dbf7a17ccfa1132fe: Status 404 returned error can't find the container with id 12155afd58ade3e9433a7fdd64f2bb00ea2fb98597eb219dbf7a17ccfa1132fe Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.869594 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.875077 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.875373 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: 
E0216 10:02:48.875491 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert podName:cd61e4fa-ce01-4597-9f4c-e90419b3c582 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:52.875456456 +0000 UTC m=+1030.568612696 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert") pod "infra-operator-controller-manager-79d975b745-5fwts" (UID: "cd61e4fa-ce01-4597-9f4c-e90419b3c582") : secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.879033 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.885802 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.900131 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"] Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.901815 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa14470_c664_49a4_88f4_d48c9c2f7eda.slice/crio-dc2ef44d9b574847ac2b65141faf278c6f597b711f7182d1a5f4c981b30c9364 WatchSource:0}: Error finding container dc2ef44d9b574847ac2b65141faf278c6f597b711f7182d1a5f4c981b30c9364: Status 404 returned error can't find the container with id dc2ef44d9b574847ac2b65141faf278c6f597b711f7182d1a5f4c981b30c9364 Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.915604 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7dl25"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 
10:02:48.924892 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.938963 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"] Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.948556 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"] Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.952434 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2d26bf_3be8_48a8_845d_ea10f5196876.slice/crio-30ffcd4fa2d438daea3855065c746f09769ec40f2709ec46eaae4ab40a477fcf WatchSource:0}: Error finding container 30ffcd4fa2d438daea3855065c746f09769ec40f2709ec46eaae4ab40a477fcf: Status 404 returned error can't find the container with id 30ffcd4fa2d438daea3855065c746f09769ec40f2709ec46eaae4ab40a477fcf Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.952475 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg42b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-qbxxf_openstack-operators(27612122-6b3e-468c-9050-ff180e9212d8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.953648 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" podUID="27612122-6b3e-468c-9050-ff180e9212d8" Feb 16 10:02:48 crc 
kubenswrapper[4814]: W0216 10:02:48.954824 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc436a9b9_dacb_4c82_b799_117453b8c695.slice/crio-4c1e916d9d49d45b32f1c7b14c0c7248d067985c553ce8b1c0b254d4a1090bb8 WatchSource:0}: Error finding container 4c1e916d9d49d45b32f1c7b14c0c7248d067985c553ce8b1c0b254d4a1090bb8: Status 404 returned error can't find the container with id 4c1e916d9d49d45b32f1c7b14c0c7248d067985c553ce8b1c0b254d4a1090bb8 Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.955950 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"] Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.956602 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kg6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-7dl25_openstack-operators(3a2d26bf-3be8-48a8-845d-ea10f5196876): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.957474 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12f8611d_0069_4ea0_a926_3f7c34ac5424.slice/crio-fe27daa0b36c36475f364f4cf005ea3fc233fc089e469756472cb1f945d4f6e7 WatchSource:0}: Error finding container fe27daa0b36c36475f364f4cf005ea3fc233fc089e469756472cb1f945d4f6e7: Status 404 returned error can't find the container with id fe27daa0b36c36475f364f4cf005ea3fc233fc089e469756472cb1f945d4f6e7 Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.957760 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" podUID="3a2d26bf-3be8-48a8-845d-ea10f5196876" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.957982 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vp295,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7787dfc59c-cx6k2_openstack-operators(c436a9b9-dacb-4c82-b799-117453b8c695): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.959100 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podUID="c436a9b9-dacb-4c82-b799-117453b8c695" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.962268 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v98z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6lhcl_openstack-operators(1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.962331 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qxbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-5pd8h_openstack-operators(12f8611d-0069-4ea0-a926-3f7c34ac5424): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: W0216 10:02:48.962409 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea081c6_407f_4dd4_958f_0d567d0df233.slice/crio-1a3c3cf2ec6cc08e270049ec1c1470b32b4b160a540cdb8d8b1a6e8bb10b6562 WatchSource:0}: Error finding container 1a3c3cf2ec6cc08e270049ec1c1470b32b4b160a540cdb8d8b1a6e8bb10b6562: Status 404 returned error can't find the container with id 1a3c3cf2ec6cc08e270049ec1c1470b32b4b160a540cdb8d8b1a6e8bb10b6562 Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.963524 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podUID="1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.963636 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" podUID="12f8611d-0069-4ea0-a926-3f7c34ac5424" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.966607 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxkj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-sl5wn_openstack-operators(fea081c6-407f-4dd4-958f-0d567d0df233): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.967783 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"] Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.967881 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podUID="fea081c6-407f-4dd4-958f-0d567d0df233" Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.981781 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl"] Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.983949 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkzv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-rtsgp_openstack-operators(e9d0d20b-f520-4a52-93d5-02fa13273625): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 10:02:48 crc kubenswrapper[4814]: E0216 10:02:48.985661 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" podUID="e9d0d20b-f520-4a52-93d5-02fa13273625" Feb 16 10:02:48 crc kubenswrapper[4814]: I0216 10:02:48.992151 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"] Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.487839 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:02:49 crc kubenswrapper[4814]: 
E0216 10:02:49.488030 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.488625 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:02:53.488592812 +0000 UTC m=+1031.181748982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.605359 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" event={"ID":"27612122-6b3e-468c-9050-ff180e9212d8","Type":"ContainerStarted","Data":"ea391c16b08a146860bc2a2908a5634ccfd46dd7032160b6ea258975066a2521"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.608909 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" event={"ID":"12f8611d-0069-4ea0-a926-3f7c34ac5424","Type":"ContainerStarted","Data":"fe27daa0b36c36475f364f4cf005ea3fc233fc089e469756472cb1f945d4f6e7"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.611134 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" 
podUID="27612122-6b3e-468c-9050-ff180e9212d8" Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.613957 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" podUID="12f8611d-0069-4ea0-a926-3f7c34ac5424" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.614004 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" event={"ID":"fea081c6-407f-4dd4-958f-0d567d0df233","Type":"ContainerStarted","Data":"1a3c3cf2ec6cc08e270049ec1c1470b32b4b160a540cdb8d8b1a6e8bb10b6562"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.623343 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" event={"ID":"2d17d4ba-3b70-4b99-808c-a9fb764754a4","Type":"ContainerStarted","Data":"228a9ca389dd547e50ec3b286c9f408d74050c95b38074f928973dce27a0a6fa"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.623582 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podUID="fea081c6-407f-4dd4-958f-0d567d0df233" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.626898 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-knqjf"] Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.629097 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" event={"ID":"2ffba7b1-f1c7-4422-bbd2-240022e594a9","Type":"ContainerStarted","Data":"b1344d545a78a496f8902a31c1c3bbbd00f3aec27751e827156ecf2866ec41c7"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.629256 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.630478 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" event={"ID":"e9d0d20b-f520-4a52-93d5-02fa13273625","Type":"ContainerStarted","Data":"18c7bfaf3404867340546b8744970e3ce4cc02572bb84a736c6832235a021de3"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.636197 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" podUID="e9d0d20b-f520-4a52-93d5-02fa13273625" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.636704 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" event={"ID":"d6383f25-e9d4-4606-aa4a-fd1ed2b9299c","Type":"ContainerStarted","Data":"e2994770c2aafeaa2626cec8803ff3ba76c89eb6aa8170a138ef1f3317578ae9"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.641401 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knqjf"] Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.664378 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" 
event={"ID":"57a9e823-2475-4a15-9ac0-1cd8b4f0197c","Type":"ContainerStarted","Data":"4942b8b0546329d3c810cbf59115e0491a79098e662a5dbb5a432aa36d10b03d"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.686198 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" event={"ID":"c436a9b9-dacb-4c82-b799-117453b8c695","Type":"ContainerStarted","Data":"4c1e916d9d49d45b32f1c7b14c0c7248d067985c553ce8b1c0b254d4a1090bb8"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.689062 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podUID="c436a9b9-dacb-4c82-b799-117453b8c695" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.691094 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6jzd\" (UniqueName: \"kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.691153 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.691208 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" 
event={"ID":"7282bc18-ffbd-4680-abb9-40dbe56ad895","Type":"ContainerStarted","Data":"6b11df82256d3b1942b2d81396671bb3517bc1e09edb6267403923577e8fa53e"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.691235 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.695174 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq" event={"ID":"aaa14470-c664-49a4-88f4-d48c9c2f7eda","Type":"ContainerStarted","Data":"dc2ef44d9b574847ac2b65141faf278c6f597b711f7182d1a5f4c981b30c9364"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.702020 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" event={"ID":"1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a","Type":"ContainerStarted","Data":"bb6e03110ee2d7bc61bc4116cc68ba8201bb78c4345bb03bd63d440e725a3080"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.714808 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podUID="1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.733781 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm" 
event={"ID":"0e3cc780-e5be-4808-b9c3-d514994ce8cb","Type":"ContainerStarted","Data":"12155afd58ade3e9433a7fdd64f2bb00ea2fb98597eb219dbf7a17ccfa1132fe"} Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.736892 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" event={"ID":"3a2d26bf-3be8-48a8-845d-ea10f5196876","Type":"ContainerStarted","Data":"30ffcd4fa2d438daea3855065c746f09769ec40f2709ec46eaae4ab40a477fcf"} Feb 16 10:02:49 crc kubenswrapper[4814]: E0216 10:02:49.743676 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" podUID="3a2d26bf-3be8-48a8-845d-ea10f5196876" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.795959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.796147 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6jzd\" (UniqueName: \"kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.796213 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.796785 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.797971 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.861835 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6jzd\" (UniqueName: \"kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd\") pod \"community-operators-knqjf\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") " pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:49 crc kubenswrapper[4814]: I0216 10:02:49.973933 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-knqjf" Feb 16 10:02:50 crc kubenswrapper[4814]: I0216 10:02:50.209027 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:50 crc kubenswrapper[4814]: I0216 10:02:50.209184 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.209331 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.209403 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:54.209383381 +0000 UTC m=+1031.902539561 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "metrics-server-cert" not found Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.209954 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.210118 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:02:54.210084722 +0000 UTC m=+1031.903240902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "webhook-server-cert" not found Feb 16 10:02:50 crc kubenswrapper[4814]: I0216 10:02:50.697725 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knqjf"] Feb 16 10:02:50 crc kubenswrapper[4814]: W0216 10:02:50.747898 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7316f40f_3e42_4198_89d0_a702aedc3ddc.slice/crio-fc9b6ab4d32684025c2cd44e8f005c2b2cd0fe5f0e7d36744f74e1443fa369bf WatchSource:0}: Error finding container fc9b6ab4d32684025c2cd44e8f005c2b2cd0fe5f0e7d36744f74e1443fa369bf: Status 404 returned error can't find the container with id fc9b6ab4d32684025c2cd44e8f005c2b2cd0fe5f0e7d36744f74e1443fa369bf Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.762329 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podUID="fea081c6-407f-4dd4-958f-0d567d0df233" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763219 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" podUID="e9d0d20b-f520-4a52-93d5-02fa13273625" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763298 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" podUID="27612122-6b3e-468c-9050-ff180e9212d8" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763387 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podUID="c436a9b9-dacb-4c82-b799-117453b8c695" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763522 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" podUID="12f8611d-0069-4ea0-a926-3f7c34ac5424" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763626 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podUID="1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a" Feb 16 10:02:50 crc kubenswrapper[4814]: E0216 10:02:50.763679 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" podUID="3a2d26bf-3be8-48a8-845d-ea10f5196876" Feb 16 10:02:51 crc kubenswrapper[4814]: I0216 10:02:51.825619 4814 generic.go:334] "Generic (PLEG): container finished" podID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerID="716efeecdd0f61d584f6280750b3bb6d8952d86ba8e3cf4c71007403f0c78892" exitCode=0 Feb 16 10:02:51 crc kubenswrapper[4814]: I0216 10:02:51.825797 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerDied","Data":"716efeecdd0f61d584f6280750b3bb6d8952d86ba8e3cf4c71007403f0c78892"} Feb 16 10:02:51 crc kubenswrapper[4814]: I0216 10:02:51.826106 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" 
event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerStarted","Data":"fc9b6ab4d32684025c2cd44e8f005c2b2cd0fe5f0e7d36744f74e1443fa369bf"} Feb 16 10:02:52 crc kubenswrapper[4814]: I0216 10:02:52.926455 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:02:52 crc kubenswrapper[4814]: E0216 10:02:52.926743 4814 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:52 crc kubenswrapper[4814]: E0216 10:02:52.926859 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert podName:cd61e4fa-ce01-4597-9f4c-e90419b3c582 nodeName:}" failed. No retries permitted until 2026-02-16 10:03:00.92682917 +0000 UTC m=+1038.619985350 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert") pod "infra-operator-controller-manager-79d975b745-5fwts" (UID: "cd61e4fa-ce01-4597-9f4c-e90419b3c582") : secret "infra-operator-webhook-server-cert" not found Feb 16 10:02:53 crc kubenswrapper[4814]: I0216 10:02:53.429706 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:02:53 crc kubenswrapper[4814]: I0216 10:02:53.543154 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:02:53 crc kubenswrapper[4814]: E0216 10:02:53.543404 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:53 crc kubenswrapper[4814]: E0216 10:02:53.543501 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:03:01.543475896 +0000 UTC m=+1039.236632076 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:02:54 crc kubenswrapper[4814]: I0216 10:02:54.256844 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:54 crc kubenswrapper[4814]: I0216 10:02:54.257526 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:02:54 crc kubenswrapper[4814]: E0216 10:02:54.257124 4814 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 10:02:54 crc kubenswrapper[4814]: E0216 10:02:54.257684 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:03:02.257646528 +0000 UTC m=+1039.950802768 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "metrics-server-cert" not found Feb 16 10:02:54 crc kubenswrapper[4814]: E0216 10:02:54.257860 4814 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 10:02:54 crc kubenswrapper[4814]: E0216 10:02:54.258000 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs podName:c2b42d7c-69c1-4052-910f-a174001cc739 nodeName:}" failed. No retries permitted until 2026-02-16 10:03:02.257937117 +0000 UTC m=+1039.951093377 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs") pod "openstack-operator-controller-manager-5c6596c9fc-2tsm2" (UID: "c2b42d7c-69c1-4052-910f-a174001cc739") : secret "webhook-server-cert" not found Feb 16 10:03:00 crc kubenswrapper[4814]: I0216 10:03:00.969953 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:03:00 crc kubenswrapper[4814]: I0216 10:03:00.981947 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd61e4fa-ce01-4597-9f4c-e90419b3c582-cert\") pod \"infra-operator-controller-manager-79d975b745-5fwts\" (UID: \"cd61e4fa-ce01-4597-9f4c-e90419b3c582\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:03:01 crc 
kubenswrapper[4814]: I0216 10:03:01.040002 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.434225 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.434462 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n8th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-kmskc_openstack-operators(5dce01de-2987-428e-8e82-916685ec38d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.435698 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" podUID="5dce01de-2987-428e-8e82-916685ec38d0" Feb 16 10:03:01 crc kubenswrapper[4814]: I0216 10:03:01.587285 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.587463 4814 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.587524 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert podName:a9e0b3a6-0817-4c54-acf5-11145e9e0dab nodeName:}" failed. No retries permitted until 2026-02-16 10:03:17.587504051 +0000 UTC m=+1055.280660231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" (UID: "a9e0b3a6-0817-4c54-acf5-11145e9e0dab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 10:03:01 crc kubenswrapper[4814]: E0216 10:03:01.942739 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" podUID="5dce01de-2987-428e-8e82-916685ec38d0" Feb 16 10:03:02 crc kubenswrapper[4814]: E0216 10:03:02.293816 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 16 10:03:02 crc kubenswrapper[4814]: E0216 10:03:02.294157 4814 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dq9fk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-mscb9_openstack-operators(d6383f25-e9d4-4606-aa4a-fd1ed2b9299c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:02 crc kubenswrapper[4814]: E0216 10:03:02.295677 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" podUID="d6383f25-e9d4-4606-aa4a-fd1ed2b9299c" Feb 16 10:03:02 crc kubenswrapper[4814]: I0216 10:03:02.303368 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:03:02 crc kubenswrapper[4814]: I0216 10:03:02.303458 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:03:02 crc kubenswrapper[4814]: I0216 10:03:02.310001 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-metrics-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:03:02 crc kubenswrapper[4814]: I0216 10:03:02.310884 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c2b42d7c-69c1-4052-910f-a174001cc739-webhook-certs\") pod \"openstack-operator-controller-manager-5c6596c9fc-2tsm2\" (UID: \"c2b42d7c-69c1-4052-910f-a174001cc739\") " pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:03:02 crc kubenswrapper[4814]: I0216 10:03:02.344051 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" Feb 16 10:03:02 crc kubenswrapper[4814]: E0216 10:03:02.950952 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" podUID="d6383f25-e9d4-4606-aa4a-fd1ed2b9299c" Feb 16 10:03:03 crc kubenswrapper[4814]: E0216 10:03:03.181706 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 16 10:03:03 crc kubenswrapper[4814]: E0216 10:03:03.181942 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlc27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-mrqpp_openstack-operators(e763fa22-f350-4b3c-930e-f115981b2cd5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:03 crc kubenswrapper[4814]: E0216 10:03:03.183377 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" podUID="e763fa22-f350-4b3c-930e-f115981b2cd5" Feb 16 10:03:03 crc kubenswrapper[4814]: E0216 10:03:03.958587 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" podUID="e763fa22-f350-4b3c-930e-f115981b2cd5" Feb 16 10:03:06 crc kubenswrapper[4814]: E0216 10:03:06.383596 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 16 10:03:06 crc kubenswrapper[4814]: E0216 10:03:06.383982 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vhnxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-9ltsr_openstack-operators(2ffba7b1-f1c7-4422-bbd2-240022e594a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:06 crc kubenswrapper[4814]: E0216 10:03:06.385296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" podUID="2ffba7b1-f1c7-4422-bbd2-240022e594a9" Feb 16 10:03:06 crc kubenswrapper[4814]: E0216 10:03:06.997714 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" podUID="2ffba7b1-f1c7-4422-bbd2-240022e594a9" Feb 16 10:03:07 crc kubenswrapper[4814]: E0216 10:03:07.859081 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 16 10:03:07 crc kubenswrapper[4814]: E0216 10:03:07.859920 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jpsb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-h5w4b_openstack-operators(0808e383-92fc-4af4-82c1-7324a6729e7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:07 crc kubenswrapper[4814]: E0216 10:03:07.861205 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" podUID="0808e383-92fc-4af4-82c1-7324a6729e7a" Feb 16 10:03:07 crc kubenswrapper[4814]: I0216 10:03:07.960276 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:03:07 crc kubenswrapper[4814]: I0216 10:03:07.960386 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:03:07 crc kubenswrapper[4814]: I0216 10:03:07.960457 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:03:07 crc kubenswrapper[4814]: I0216 10:03:07.961613 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:03:07 crc kubenswrapper[4814]: I0216 10:03:07.961692 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53" gracePeriod=600 Feb 16 10:03:08 crc kubenswrapper[4814]: E0216 10:03:08.009428 4814 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" podUID="0808e383-92fc-4af4-82c1-7324a6729e7a" Feb 16 10:03:08 crc kubenswrapper[4814]: E0216 10:03:08.495043 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" Feb 16 10:03:08 crc kubenswrapper[4814]: E0216 10:03:08.495379 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pb648,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-f9l2v_openstack-operators(57a9e823-2475-4a15-9ac0-1cd8b4f0197c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:08 crc kubenswrapper[4814]: E0216 10:03:08.496741 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" podUID="57a9e823-2475-4a15-9ac0-1cd8b4f0197c" Feb 16 10:03:09 crc kubenswrapper[4814]: I0216 10:03:09.018350 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53" exitCode=0 Feb 16 10:03:09 crc 
kubenswrapper[4814]: I0216 10:03:09.018432 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53"} Feb 16 10:03:09 crc kubenswrapper[4814]: I0216 10:03:09.018509 4814 scope.go:117] "RemoveContainer" containerID="0d06be8c91c3c8023e6f3b4f7a6fc189b666a39a9481db1d4140f47bf92416f2" Feb 16 10:03:09 crc kubenswrapper[4814]: E0216 10:03:09.020649 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" podUID="57a9e823-2475-4a15-9ac0-1cd8b4f0197c" Feb 16 10:03:09 crc kubenswrapper[4814]: E0216 10:03:09.255433 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 10:03:09 crc kubenswrapper[4814]: E0216 10:03:09.255671 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcn4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-f6jgb_openstack-operators(7282bc18-ffbd-4680-abb9-40dbe56ad895): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:09 crc kubenswrapper[4814]: E0216 10:03:09.257758 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" podUID="7282bc18-ffbd-4680-abb9-40dbe56ad895" Feb 16 10:03:10 crc kubenswrapper[4814]: E0216 10:03:10.027070 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" podUID="7282bc18-ffbd-4680-abb9-40dbe56ad895" Feb 16 10:03:16 crc kubenswrapper[4814]: E0216 10:03:16.946646 4814 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 16 10:03:16 crc kubenswrapper[4814]: E0216 10:03:16.947860 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg42b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-qbxxf_openstack-operators(27612122-6b3e-468c-9050-ff180e9212d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:16 crc kubenswrapper[4814]: E0216 10:03:16.949046 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" podUID="27612122-6b3e-468c-9050-ff180e9212d8" Feb 16 10:03:17 crc kubenswrapper[4814]: I0216 10:03:17.664697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:03:17 crc kubenswrapper[4814]: I0216 10:03:17.674287 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/a9e0b3a6-0817-4c54-acf5-11145e9e0dab-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz\" (UID: \"a9e0b3a6-0817-4c54-acf5-11145e9e0dab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:03:17 crc kubenswrapper[4814]: I0216 10:03:17.701641 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" Feb 16 10:03:19 crc kubenswrapper[4814]: E0216 10:03:19.208127 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 16 10:03:19 crc kubenswrapper[4814]: E0216 10:03:19.208389 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxkj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-sl5wn_openstack-operators(fea081c6-407f-4dd4-958f-0d567d0df233): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:19 crc kubenswrapper[4814]: E0216 10:03:19.209623 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podUID="fea081c6-407f-4dd4-958f-0d567d0df233" Feb 16 10:03:21 crc kubenswrapper[4814]: E0216 10:03:21.437343 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f" Feb 16 10:03:21 crc kubenswrapper[4814]: E0216 10:03:21.441643 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f" Feb 16 10:03:21 crc kubenswrapper[4814]: E0216 10:03:21.441770 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vp295,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7787dfc59c-cx6k2_openstack-operators(c436a9b9-dacb-4c82-b799-117453b8c695): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:21 crc kubenswrapper[4814]: E0216 10:03:21.442998 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podUID="c436a9b9-dacb-4c82-b799-117453b8c695" Feb 16 10:03:21 crc kubenswrapper[4814]: I0216 10:03:21.650455 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"] Feb 16 10:03:22 crc kubenswrapper[4814]: E0216 10:03:22.015874 4814 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 16 10:03:22 crc kubenswrapper[4814]: E0216 10:03:22.016151 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v98z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6lhcl_openstack-operators(1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:03:22 crc kubenswrapper[4814]: E0216 10:03:22.017425 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podUID="1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a" Feb 16 10:03:22 crc kubenswrapper[4814]: W0216 10:03:22.065825 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd61e4fa_ce01_4597_9f4c_e90419b3c582.slice/crio-b707a9bbbf29032474dc34b22830af829a50bb4c43fff8557a91566737e99c75 WatchSource:0}: Error finding container b707a9bbbf29032474dc34b22830af829a50bb4c43fff8557a91566737e99c75: Status 404 returned error can't find the container with id 
b707a9bbbf29032474dc34b22830af829a50bb4c43fff8557a91566737e99c75 Feb 16 10:03:22 crc kubenswrapper[4814]: I0216 10:03:22.142610 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" event={"ID":"cd61e4fa-ce01-4597-9f4c-e90419b3c582","Type":"ContainerStarted","Data":"b707a9bbbf29032474dc34b22830af829a50bb4c43fff8557a91566737e99c75"} Feb 16 10:03:22 crc kubenswrapper[4814]: I0216 10:03:22.400342 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"] Feb 16 10:03:22 crc kubenswrapper[4814]: I0216 10:03:22.980225 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"] Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.271525 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerStarted","Data":"0791e6cc3bce89691c213866c983291292c5a8737387e6681613d49bab9e1be1"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.296173 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm" event={"ID":"0e3cc780-e5be-4808-b9c3-d514994ce8cb","Type":"ContainerStarted","Data":"6453bad72e583366e8ece6db335d4c371a0d26a532d4defa4c0c3d18bbc41d02"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.297323 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.323366 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" 
event={"ID":"e720ed93-e990-4508-ad82-cd7c7d097e9c","Type":"ContainerStarted","Data":"31aa0ef8b75603cfef724f74be4dafa5cf8fb0f229681889602396cf210cfbd1"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.325688 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.346837 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" event={"ID":"e9d0d20b-f520-4a52-93d5-02fa13273625","Type":"ContainerStarted","Data":"4460f86cf718d22b06bc56ae0a876c9d3199312edafe38af9cea15c6b6aaebfa"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.347744 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.364642 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" event={"ID":"96b8a99b-83ce-4d62-b471-a8bcc47aa67a","Type":"ContainerStarted","Data":"3a9179dfacaf831458f5e995157d25e610ab802709580ee5f546fc74373495bc"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.365637 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.367559 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" event={"ID":"a9e0b3a6-0817-4c54-acf5-11145e9e0dab","Type":"ContainerStarted","Data":"661fda8535e8332766b86e91e238a1c58ee32e752b113da38a61789595b26072"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.394857 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm" podStartSLOduration=9.704340145 podStartE2EDuration="38.394824486s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.869612641 +0000 UTC m=+1026.562768821" lastFinishedPulling="2026-02-16 10:03:17.560096982 +0000 UTC m=+1055.253253162" observedRunningTime="2026-02-16 10:03:23.381068739 +0000 UTC m=+1061.074224919" watchObservedRunningTime="2026-02-16 10:03:23.394824486 +0000 UTC m=+1061.087980666" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.430375 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" event={"ID":"2d17d4ba-3b70-4b99-808c-a9fb764754a4","Type":"ContainerStarted","Data":"4cbc3a06cce611d1af70fbe9a4133670e01c3cbbc710c445869e69374130c3a1"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.431528 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.435176 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" event={"ID":"c2b42d7c-69c1-4052-910f-a174001cc739","Type":"ContainerStarted","Data":"b3028034331e67bbf26a2ac23b6d89bc9889720c7660067606123cd330e43028"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.456751 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" event={"ID":"12f8611d-0069-4ea0-a926-3f7c34ac5424","Type":"ContainerStarted","Data":"ca6b90ae68337b202615c023f9a425abf02cceb57ab8110f204ce44c335c4471"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.457062 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" Feb 16 
10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.477963 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq" event={"ID":"aaa14470-c664-49a4-88f4-d48c9c2f7eda","Type":"ContainerStarted","Data":"ef6033607598304d1266b2b267719ed97ef02754980168ce0c48c0a37e35a636"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.479187 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.483298 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv" podStartSLOduration=10.312927527 podStartE2EDuration="39.483258858s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.39044781 +0000 UTC m=+1026.083604010" lastFinishedPulling="2026-02-16 10:03:17.560779161 +0000 UTC m=+1055.253935341" observedRunningTime="2026-02-16 10:03:23.457942975 +0000 UTC m=+1061.151099175" watchObservedRunningTime="2026-02-16 10:03:23.483258858 +0000 UTC m=+1061.176415038" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.502813 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x" podStartSLOduration=11.057402864 podStartE2EDuration="39.502787668s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.41388783 +0000 UTC m=+1026.107044010" lastFinishedPulling="2026-02-16 10:03:16.859272634 +0000 UTC m=+1054.552428814" observedRunningTime="2026-02-16 10:03:23.49963483 +0000 UTC m=+1061.192791030" watchObservedRunningTime="2026-02-16 10:03:23.502787668 +0000 UTC m=+1061.195943848" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.519969 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf"} Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.541004 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md" podStartSLOduration=11.621725655 podStartE2EDuration="39.540976485s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.949245355 +0000 UTC m=+1026.642401535" lastFinishedPulling="2026-02-16 10:03:16.868496185 +0000 UTC m=+1054.561652365" observedRunningTime="2026-02-16 10:03:23.540830431 +0000 UTC m=+1061.233986791" watchObservedRunningTime="2026-02-16 10:03:23.540976485 +0000 UTC m=+1061.234132665" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.561775 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h" podStartSLOduration=6.089975244 podStartE2EDuration="38.56175579s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.96219973 +0000 UTC m=+1026.655355910" lastFinishedPulling="2026-02-16 10:03:21.433980276 +0000 UTC m=+1059.127136456" observedRunningTime="2026-02-16 10:03:23.560954058 +0000 UTC m=+1061.254110238" watchObservedRunningTime="2026-02-16 10:03:23.56175579 +0000 UTC m=+1061.254911970" Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.693783 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp" podStartSLOduration=6.245715091 podStartE2EDuration="38.693760569s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.983814389 +0000 UTC m=+1026.676970569" lastFinishedPulling="2026-02-16 
10:03:21.431859867 +0000 UTC m=+1059.125016047" observedRunningTime="2026-02-16 10:03:23.590123549 +0000 UTC m=+1061.283279749" watchObservedRunningTime="2026-02-16 10:03:23.693760569 +0000 UTC m=+1061.386916749"
Feb 16 10:03:23 crc kubenswrapper[4814]: I0216 10:03:23.744619 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq" podStartSLOduration=19.443914752 podStartE2EDuration="39.744583911s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.915652359 +0000 UTC m=+1026.608808539" lastFinishedPulling="2026-02-16 10:03:09.216321518 +0000 UTC m=+1046.909477698" observedRunningTime="2026-02-16 10:03:23.726374549 +0000 UTC m=+1061.419530729" watchObservedRunningTime="2026-02-16 10:03:23.744583911 +0000 UTC m=+1061.437740091"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.567167 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" event={"ID":"d6383f25-e9d4-4606-aa4a-fd1ed2b9299c","Type":"ContainerStarted","Data":"a107ef678518ca927c125340c8aa0c70b4c8e136de1a30b36e28df15cb074102"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.568117 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.585062 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" event={"ID":"e763fa22-f350-4b3c-930e-f115981b2cd5","Type":"ContainerStarted","Data":"94e0a0fad3ffbf45cde29666a3d5eb464eb969b588cfecab50df8afccda88dc9"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.585551 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.594572 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" event={"ID":"7282bc18-ffbd-4680-abb9-40dbe56ad895","Type":"ContainerStarted","Data":"a9456c6a32c7a37e6e6cb7bc9905dab7b4e81bb3ee9d7d1a5f1df7df6b83d396"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.595780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.603601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" event={"ID":"2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74","Type":"ContainerStarted","Data":"d9304429d6e092c12494aecba6db2b2f2fb6d4d819ba2d0153d6877dff94a14e"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.604604 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.613556 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" event={"ID":"c2b42d7c-69c1-4052-910f-a174001cc739","Type":"ContainerStarted","Data":"b6c65a47bea5e23040a4dcf15434e9853e355545aea6abd5c844d817dadc9336"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.614603 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.653192 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" event={"ID":"5dce01de-2987-428e-8e82-916685ec38d0","Type":"ContainerStarted","Data":"8304db2c6a97844a592385ca0be10a55d36153b0a28f926286367bfc1253ea3b"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.653619 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.655355 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9" podStartSLOduration=7.161742036 podStartE2EDuration="40.655338023s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.824348236 +0000 UTC m=+1026.517504416" lastFinishedPulling="2026-02-16 10:03:22.317944223 +0000 UTC m=+1060.011100403" observedRunningTime="2026-02-16 10:03:24.653087201 +0000 UTC m=+1062.346243391" watchObservedRunningTime="2026-02-16 10:03:24.655338023 +0000 UTC m=+1062.348494203"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.668372 4814 generic.go:334] "Generic (PLEG): container finished" podID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerID="0791e6cc3bce89691c213866c983291292c5a8737387e6681613d49bab9e1be1" exitCode=0
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.668524 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerDied","Data":"0791e6cc3bce89691c213866c983291292c5a8737387e6681613d49bab9e1be1"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.718523 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" event={"ID":"3a2d26bf-3be8-48a8-845d-ea10f5196876","Type":"ContainerStarted","Data":"906898397c05911e7e03d353137116fc44f08dbfefd3cd1f892d36ffef142487"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.719352 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.731955 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" event={"ID":"2ffba7b1-f1c7-4422-bbd2-240022e594a9","Type":"ContainerStarted","Data":"d58d8e4c1f72ba05ad6889ce59935026b7ca77e773efd7408eca88772267e167"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.732615 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.757081 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb" podStartSLOduration=7.271128119 podStartE2EDuration="40.75705695s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.950694266 +0000 UTC m=+1026.643850446" lastFinishedPulling="2026-02-16 10:03:22.436623097 +0000 UTC m=+1060.129779277" observedRunningTime="2026-02-16 10:03:24.714211742 +0000 UTC m=+1062.407367932" watchObservedRunningTime="2026-02-16 10:03:24.75705695 +0000 UTC m=+1062.450213130"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.761037 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" event={"ID":"0808e383-92fc-4af4-82c1-7324a6729e7a","Type":"ContainerStarted","Data":"751e4286b44115044ff2f156df7d2b3b1b27062ee2717ccfc6f27596352678eb"}
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.761405 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"
Feb 16 10:03:24 crc kubenswrapper[4814]: I0216 10:03:24.776584 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45" podStartSLOduration=12.167206725 podStartE2EDuration="40.77655277s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.250225459 +0000 UTC m=+1025.943381639" lastFinishedPulling="2026-02-16 10:03:16.859571504 +0000 UTC m=+1054.552727684" observedRunningTime="2026-02-16 10:03:24.755866746 +0000 UTC m=+1062.449022926" watchObservedRunningTime="2026-02-16 10:03:24.77655277 +0000 UTC m=+1062.469708950"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.038223 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp" podStartSLOduration=7.162273923 podStartE2EDuration="41.038199232s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.442810896 +0000 UTC m=+1026.135967076" lastFinishedPulling="2026-02-16 10:03:22.318736205 +0000 UTC m=+1060.011892385" observedRunningTime="2026-02-16 10:03:24.833689519 +0000 UTC m=+1062.526845729" watchObservedRunningTime="2026-02-16 10:03:25.038199232 +0000 UTC m=+1062.731355412"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.040037 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2" podStartSLOduration=39.040030474 podStartE2EDuration="39.040030474s" podCreationTimestamp="2026-02-16 10:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:03:25.034657662 +0000 UTC m=+1062.727813852" watchObservedRunningTime="2026-02-16 10:03:25.040030474 +0000 UTC m=+1062.733186654"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.203906 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc" podStartSLOduration=8.038281525 podStartE2EDuration="41.20386549s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.269646126 +0000 UTC m=+1025.962802306" lastFinishedPulling="2026-02-16 10:03:21.435230091 +0000 UTC m=+1059.128386271" observedRunningTime="2026-02-16 10:03:25.168683599 +0000 UTC m=+1062.861839779" watchObservedRunningTime="2026-02-16 10:03:25.20386549 +0000 UTC m=+1062.897021670"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.313180 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b" podStartSLOduration=7.268854656 podStartE2EDuration="41.313140709s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.436628552 +0000 UTC m=+1026.129784732" lastFinishedPulling="2026-02-16 10:03:22.480914605 +0000 UTC m=+1060.174070785" observedRunningTime="2026-02-16 10:03:25.299548796 +0000 UTC m=+1062.992704996" watchObservedRunningTime="2026-02-16 10:03:25.313140709 +0000 UTC m=+1063.006296889"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.460828 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr" podStartSLOduration=8.089060387 podStartE2EDuration="41.46079671s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.946376165 +0000 UTC m=+1026.639532345" lastFinishedPulling="2026-02-16 10:03:22.318112488 +0000 UTC m=+1060.011268668" observedRunningTime="2026-02-16 10:03:25.361830241 +0000 UTC m=+1063.054986421" watchObservedRunningTime="2026-02-16 10:03:25.46079671 +0000 UTC m=+1063.153952900"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.775312 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" event={"ID":"57a9e823-2475-4a15-9ac0-1cd8b4f0197c","Type":"ContainerStarted","Data":"5812a00092b871875180f29161fdd2e40c226b86df874cffd99a8dd1a94a6df1"}
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.776613 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.795256 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerStarted","Data":"991edaaeca449f3cea2f70780a395b78c35dcefee9b5eca05fbc0f971dc900b3"}
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.807050 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25" podStartSLOduration=7.7795224990000005 podStartE2EDuration="40.807016674s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.956424017 +0000 UTC m=+1026.649580197" lastFinishedPulling="2026-02-16 10:03:21.983918182 +0000 UTC m=+1059.677074372" observedRunningTime="2026-02-16 10:03:25.469553296 +0000 UTC m=+1063.162709476" watchObservedRunningTime="2026-02-16 10:03:25.807016674 +0000 UTC m=+1063.500172854"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.807605 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v" podStartSLOduration=5.068459461 podStartE2EDuration="40.807598811s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.947188717 +0000 UTC m=+1026.640344897" lastFinishedPulling="2026-02-16 10:03:24.686328067 +0000 UTC m=+1062.379484247" observedRunningTime="2026-02-16 10:03:25.798837605 +0000 UTC m=+1063.491993785" watchObservedRunningTime="2026-02-16 10:03:25.807598811 +0000 UTC m=+1063.500754991"
Feb 16 10:03:25 crc kubenswrapper[4814]: I0216 10:03:25.885632 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-knqjf" podStartSLOduration=4.954089517 podStartE2EDuration="36.88560879s" podCreationTimestamp="2026-02-16 10:02:49 +0000 UTC" firstStartedPulling="2026-02-16 10:02:53.429040852 +0000 UTC m=+1031.122197032" lastFinishedPulling="2026-02-16 10:03:25.360560135 +0000 UTC m=+1063.053716305" observedRunningTime="2026-02-16 10:03:25.880396833 +0000 UTC m=+1063.573553033" watchObservedRunningTime="2026-02-16 10:03:25.88560879 +0000 UTC m=+1063.578764990"
Feb 16 10:03:27 crc kubenswrapper[4814]: E0216 10:03:27.997479 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" podUID="27612122-6b3e-468c-9050-ff180e9212d8"
Feb 16 10:03:29 crc kubenswrapper[4814]: I0216 10:03:29.974830 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:29 crc kubenswrapper[4814]: I0216 10:03:29.975361 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.056945 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.845392 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" event={"ID":"cd61e4fa-ce01-4597-9f4c-e90419b3c582","Type":"ContainerStarted","Data":"faaee739c3687217f42c71eeb533b33e6e41af3f6ac72c0169f994e0da0f7f35"}
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.846217 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.847663 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" event={"ID":"a9e0b3a6-0817-4c54-acf5-11145e9e0dab","Type":"ContainerStarted","Data":"9cb6cf8a1b55ab268e10518ac472c7af9c7ca94ca56dbe262cc0e10be744223b"}
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.882100 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts" podStartSLOduration=38.627032683 podStartE2EDuration="46.882068064s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:03:22.087241363 +0000 UTC m=+1059.780397543" lastFinishedPulling="2026-02-16 10:03:30.342276734 +0000 UTC m=+1068.035432924" observedRunningTime="2026-02-16 10:03:30.869721286 +0000 UTC m=+1068.562877466" watchObservedRunningTime="2026-02-16 10:03:30.882068064 +0000 UTC m=+1068.575224254"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.906979 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.912387 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz" podStartSLOduration=38.631524077 podStartE2EDuration="45.912362928s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:03:23.056927136 +0000 UTC m=+1060.750083316" lastFinishedPulling="2026-02-16 10:03:30.337765967 +0000 UTC m=+1068.030922167" observedRunningTime="2026-02-16 10:03:30.907247903 +0000 UTC m=+1068.600404093" watchObservedRunningTime="2026-02-16 10:03:30.912362928 +0000 UTC m=+1068.605519108"
Feb 16 10:03:30 crc kubenswrapper[4814]: I0216 10:03:30.972076 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knqjf"]
Feb 16 10:03:31 crc kubenswrapper[4814]: I0216 10:03:31.861466 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:03:32 crc kubenswrapper[4814]: I0216 10:03:32.351665 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5c6596c9fc-2tsm2"
Feb 16 10:03:32 crc kubenswrapper[4814]: I0216 10:03:32.868988 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-knqjf" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="registry-server" containerID="cri-o://991edaaeca449f3cea2f70780a395b78c35dcefee9b5eca05fbc0f971dc900b3" gracePeriod=2
Feb 16 10:03:33 crc kubenswrapper[4814]: I0216 10:03:33.881689 4814 generic.go:334] "Generic (PLEG): container finished" podID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerID="991edaaeca449f3cea2f70780a395b78c35dcefee9b5eca05fbc0f971dc900b3" exitCode=0
Feb 16 10:03:33 crc kubenswrapper[4814]: I0216 10:03:33.881784 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerDied","Data":"991edaaeca449f3cea2f70780a395b78c35dcefee9b5eca05fbc0f971dc900b3"}
Feb 16 10:03:33 crc kubenswrapper[4814]: E0216 10:03:33.996138 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podUID="fea081c6-407f-4dd4-958f-0d567d0df233"
Feb 16 10:03:34 crc kubenswrapper[4814]: I0216 10:03:34.963120 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-shv45"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.022674 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-kmskc"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.252859 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-ndn8x"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.368732 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-9ltsr"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.370999 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.380801 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-mrqpp"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.404222 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-dl9md"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.456929 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-mscb9"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.482835 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content\") pod \"7316f40f-3e42-4198-89d0-a702aedc3ddc\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") "
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.482900 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6jzd\" (UniqueName: \"kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd\") pod \"7316f40f-3e42-4198-89d0-a702aedc3ddc\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") "
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.483088 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities\") pod \"7316f40f-3e42-4198-89d0-a702aedc3ddc\" (UID: \"7316f40f-3e42-4198-89d0-a702aedc3ddc\") "
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.486479 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities" (OuterVolumeSpecName: "utilities") pod "7316f40f-3e42-4198-89d0-a702aedc3ddc" (UID: "7316f40f-3e42-4198-89d0-a702aedc3ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.511177 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-f6jgb"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.521017 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd" (OuterVolumeSpecName: "kube-api-access-h6jzd") pod "7316f40f-3e42-4198-89d0-a702aedc3ddc" (UID: "7316f40f-3e42-4198-89d0-a702aedc3ddc"). InnerVolumeSpecName "kube-api-access-h6jzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.528118 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-h5w4b"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.538889 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wv8lv"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.586350 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6jzd\" (UniqueName: \"kubernetes.io/projected/7316f40f-3e42-4198-89d0-a702aedc3ddc-kube-api-access-h6jzd\") on node \"crc\" DevicePath \"\""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.586415 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.621070 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7316f40f-3e42-4198-89d0-a702aedc3ddc" (UID: "7316f40f-3e42-4198-89d0-a702aedc3ddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.688697 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7316f40f-3e42-4198-89d0-a702aedc3ddc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.779910 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-f9l2v"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.905358 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knqjf" event={"ID":"7316f40f-3e42-4198-89d0-a702aedc3ddc","Type":"ContainerDied","Data":"fc9b6ab4d32684025c2cd44e8f005c2b2cd0fe5f0e7d36744f74e1443fa369bf"}
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.905442 4814 scope.go:117] "RemoveContainer" containerID="991edaaeca449f3cea2f70780a395b78c35dcefee9b5eca05fbc0f971dc900b3"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.905501 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knqjf"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.938522 4814 scope.go:117] "RemoveContainer" containerID="0791e6cc3bce89691c213866c983291292c5a8737387e6681613d49bab9e1be1"
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.940082 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knqjf"]
Feb 16 10:03:35 crc kubenswrapper[4814]: I0216 10:03:35.947525 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-knqjf"]
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.458575 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-rtsgp"
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.461679 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-5pd8h"
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.461780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wh9lm"
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.462742 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-7dl25"
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.462934 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-qstdq"
Feb 16 10:03:36 crc kubenswrapper[4814]: I0216 10:03:36.477315 4814 scope.go:117] "RemoveContainer" containerID="716efeecdd0f61d584f6280750b3bb6d8952d86ba8e3cf4c71007403f0c78892"
Feb 16 10:03:36 crc kubenswrapper[4814]: E0216 10:03:36.477366 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/openstack-k8s-operators/watcher-operator:44079e296ffd2d2bcf81505d781ced08bea9022f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podUID="c436a9b9-dacb-4c82-b799-117453b8c695"
Feb 16 10:03:36 crc kubenswrapper[4814]: E0216 10:03:36.995436 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podUID="1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a"
Feb 16 10:03:37 crc kubenswrapper[4814]: I0216 10:03:37.005595 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" path="/var/lib/kubelet/pods/7316f40f-3e42-4198-89d0-a702aedc3ddc/volumes"
Feb 16 10:03:37 crc kubenswrapper[4814]: I0216 10:03:37.708731 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz"
Feb 16 10:03:41 crc kubenswrapper[4814]: I0216 10:03:41.046700 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-5fwts"
Feb 16 10:03:44 crc kubenswrapper[4814]: I0216 10:03:44.311674 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" event={"ID":"27612122-6b3e-468c-9050-ff180e9212d8","Type":"ContainerStarted","Data":"6157f07d3ce87ac2a72a5ec00546e79e2149634fa94c0c97a0a290be606c0dd9"}
Feb 16 10:03:44 crc kubenswrapper[4814]: I0216 10:03:44.312893 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:03:44 crc kubenswrapper[4814]: I0216 10:03:44.334238 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf" podStartSLOduration=5.466125639 podStartE2EDuration="1m0.334216743s" podCreationTimestamp="2026-02-16 10:02:44 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.952289891 +0000 UTC m=+1026.645446061" lastFinishedPulling="2026-02-16 10:03:43.820380985 +0000 UTC m=+1081.513537165" observedRunningTime="2026-02-16 10:03:44.32915016 +0000 UTC m=+1082.022306340" watchObservedRunningTime="2026-02-16 10:03:44.334216743 +0000 UTC m=+1082.027372923"
Feb 16 10:03:49 crc kubenswrapper[4814]: I0216 10:03:49.350026 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" event={"ID":"fea081c6-407f-4dd4-958f-0d567d0df233","Type":"ContainerStarted","Data":"2089f0d8f3cdca36fe43d3ba14485d237fa039d3046e696c887c4caec49fe9d7"}
Feb 16 10:03:49 crc kubenswrapper[4814]: I0216 10:03:49.351228 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:03:49 crc kubenswrapper[4814]: I0216 10:03:49.377479 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn" podStartSLOduration=5.191110208 podStartE2EDuration="1m4.377451635s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.966398179 +0000 UTC m=+1026.659554359" lastFinishedPulling="2026-02-16 10:03:48.152739596 +0000 UTC m=+1085.845895786" observedRunningTime="2026-02-16 10:03:49.367875165 +0000 UTC m=+1087.061031345" watchObservedRunningTime="2026-02-16 10:03:49.377451635 +0000 UTC m=+1087.070607815"
Feb 16 10:03:51 crc kubenswrapper[4814]: I0216 10:03:51.370422 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" event={"ID":"1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a","Type":"ContainerStarted","Data":"b61553e0187124b0d880e62bc03a6e8bce5e3afd0e4b2f2ca86960b041c5cc52"}
Feb 16 10:03:51 crc kubenswrapper[4814]: I0216 10:03:51.403338 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6lhcl" podStartSLOduration=4.140229722 podStartE2EDuration="1m5.403305857s" podCreationTimestamp="2026-02-16 10:02:46 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.962039605 +0000 UTC m=+1026.655195785" lastFinishedPulling="2026-02-16 10:03:50.22511574 +0000 UTC m=+1087.918271920" observedRunningTime="2026-02-16 10:03:51.397885525 +0000 UTC m=+1089.091041705" watchObservedRunningTime="2026-02-16 10:03:51.403305857 +0000 UTC m=+1089.096462027"
Feb 16 10:03:52 crc kubenswrapper[4814]: I0216 10:03:52.382213 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" event={"ID":"c436a9b9-dacb-4c82-b799-117453b8c695","Type":"ContainerStarted","Data":"aa5b32f82898e51866f6504d810b4db3961dfa7e9da73b246a14ccd297eb5aa9"}
Feb 16 10:03:52 crc kubenswrapper[4814]: I0216 10:03:52.383054 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:03:52 crc kubenswrapper[4814]: I0216 10:03:52.409197 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2" podStartSLOduration=4.192772858 podStartE2EDuration="1m7.40917509s" podCreationTimestamp="2026-02-16 10:02:45 +0000 UTC" firstStartedPulling="2026-02-16 10:02:48.957668763 +0000 UTC m=+1026.650824943" lastFinishedPulling="2026-02-16 10:03:52.174070995 +0000 UTC m=+1089.867227175" observedRunningTime="2026-02-16 10:03:52.403349395 +0000 UTC m=+1090.096505575" watchObservedRunningTime="2026-02-16 10:03:52.40917509 +0000 UTC m=+1090.102331270"
Feb 16 10:03:55 crc kubenswrapper[4814]: I0216 10:03:55.738448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-qbxxf"
Feb 16 10:03:55 crc kubenswrapper[4814]: I0216 10:03:55.876991 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-sl5wn"
Feb 16 10:04:06 crc kubenswrapper[4814]: I0216 10:04:06.434941 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7787dfc59c-cx6k2"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.459254 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"]
Feb 16 10:04:24 crc kubenswrapper[4814]: E0216 10:04:24.460466 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="extract-content"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.460498 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="extract-content"
Feb 16 10:04:24 crc kubenswrapper[4814]: E0216 10:04:24.460513 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="extract-utilities"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.460520 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="extract-utilities"
Feb 16 10:04:24 crc kubenswrapper[4814]: E0216 10:04:24.460560 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="registry-server"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.460568 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="registry-server"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.460791 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7316f40f-3e42-4198-89d0-a702aedc3ddc" containerName="registry-server"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.461906 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.466954 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.467134 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.467430 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.467706 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tcddg"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.472138 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"]
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.554112 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"]
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.554895 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9glv\" (UniqueName: \"kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.555013 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.555879 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866784dbf-578xk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.564014 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.566638 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"]
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.657045 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mnq\" (UniqueName: \"kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.657315 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9glv\" (UniqueName: \"kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk"
Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.657354 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID:
\"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.657399 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.657420 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.658337 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.680909 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9glv\" (UniqueName: \"kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv\") pod \"dnsmasq-dns-5bd759bbbf-xj9pk\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.759517 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mnq\" (UniqueName: \"kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " 
pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.759623 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.759683 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.760672 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.760920 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.778190 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mnq\" (UniqueName: \"kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq\") pod \"dnsmasq-dns-866784dbf-578xk\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.788383 4814 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:04:24 crc kubenswrapper[4814]: I0216 10:04:24.880059 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:04:25 crc kubenswrapper[4814]: I0216 10:04:25.781729 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"] Feb 16 10:04:25 crc kubenswrapper[4814]: I0216 10:04:25.865013 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"] Feb 16 10:04:25 crc kubenswrapper[4814]: W0216 10:04:25.871213 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906bee2c_78af_408e_9a66_693e9471cfa3.slice/crio-28caed88c1cf0a6f8987e1c9b5eb9bb08e68e8ccd42fda070a728a8a9a7e85b5 WatchSource:0}: Error finding container 28caed88c1cf0a6f8987e1c9b5eb9bb08e68e8ccd42fda070a728a8a9a7e85b5: Status 404 returned error can't find the container with id 28caed88c1cf0a6f8987e1c9b5eb9bb08e68e8ccd42fda070a728a8a9a7e85b5 Feb 16 10:04:26 crc kubenswrapper[4814]: I0216 10:04:26.737447 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" event={"ID":"906bee2c-78af-408e-9a66-693e9471cfa3","Type":"ContainerStarted","Data":"28caed88c1cf0a6f8987e1c9b5eb9bb08e68e8ccd42fda070a728a8a9a7e85b5"} Feb 16 10:04:26 crc kubenswrapper[4814]: I0216 10:04:26.739696 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866784dbf-578xk" event={"ID":"7650f95b-47ff-4fc3-8f0f-55323141d2ed","Type":"ContainerStarted","Data":"851176f30cc6d39ba724effbc011674b9df205be8209818ba9f0d93133a7b325"} Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.200179 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 
10:04:28.234667 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.236082 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.254260 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.388446 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvgp\" (UniqueName: \"kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.388527 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.388647 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.489825 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvgp\" (UniqueName: \"kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: 
\"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.489967 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.490109 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.491614 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.491899 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.530204 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnvgp\" (UniqueName: \"kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp\") pod \"dnsmasq-dns-79ddd488bf-6cmnj\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc 
kubenswrapper[4814]: I0216 10:04:28.565614 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.633911 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.663614 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.669718 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.682869 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"] Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.794772 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.795071 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8sz4\" (UniqueName: \"kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.795146 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " 
pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.900726 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8sz4\" (UniqueName: \"kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.900837 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.900968 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.902335 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.902380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:28 crc kubenswrapper[4814]: I0216 10:04:28.959015 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8sz4\" (UniqueName: \"kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4\") pod \"dnsmasq-dns-5dcdf6b57c-fplvz\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.057216 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.240871 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.316430 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.317754 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.362754 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.417737 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.417844 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27b8\" (UniqueName: \"kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" 
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.417958 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.519399 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.519483 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.519519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v27b8\" (UniqueName: \"kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.521571 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.522132 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.585942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v27b8\" (UniqueName: \"kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8\") pod \"dnsmasq-dns-6fc5599df7-j66xj\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.676123 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.768672 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.792519 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.796147 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.796379 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.796581 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.796742 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-7hhzm" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.796876 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.797109 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.804128 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.820358 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.942597 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.948529 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.950507 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.951386 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.951601 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.951979 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.952128 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953475 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953521 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953592 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlmg6\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-kube-api-access-jlmg6\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953625 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0b4bfb-2144-4fd9-be15-07396c44a11c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953645 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953670 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953701 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953742 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953789 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953817 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.953869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0b4bfb-2144-4fd9-be15-07396c44a11c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.954391 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qpcp9"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.954804 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 16 10:04:29 crc kubenswrapper[4814]: I0216 10:04:29.955914 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059461 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4e759af-f091-47c0-accc-c68b45b277fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059569 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059604 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scn7d\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-kube-api-access-scn7d\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059643 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059735 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059761 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059834 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059860 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0b4bfb-2144-4fd9-be15-07396c44a11c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059892 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4e759af-f091-47c0-accc-c68b45b277fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059928 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059966 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.059988 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060032 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlmg6\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-kube-api-access-jlmg6\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060059 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0b4bfb-2144-4fd9-be15-07396c44a11c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060112 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060135 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060154 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060194 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060224 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.060269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.062509 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.062864 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.063875 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.065527 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a0b4bfb-2144-4fd9-be15-07396c44a11c-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.065815 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.070660 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.079025 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a0b4bfb-2144-4fd9-be15-07396c44a11c-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.082308 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.083659 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a0b4bfb-2144-4fd9-be15-07396c44a11c-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.100298 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.105718 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlmg6\" (UniqueName: \"kubernetes.io/projected/6a0b4bfb-2144-4fd9-be15-07396c44a11c-kube-api-access-jlmg6\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.136831 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"6a0b4bfb-2144-4fd9-be15-07396c44a11c\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173405 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173501 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4e759af-f091-47c0-accc-c68b45b277fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173570 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173589 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scn7d\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-kube-api-access-scn7d\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173653 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173673 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173702 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173735 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4e759af-f091-47c0-accc-c68b45b277fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173811 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173839 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.173893 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.174426 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.174475 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.174691 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.175838 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.180522 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-config-data\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.183512 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.184168 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.184862 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b4e759af-f091-47c0-accc-c68b45b277fa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.185950 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b4e759af-f091-47c0-accc-c68b45b277fa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.190913 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b4e759af-f091-47c0-accc-c68b45b277fa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.215993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scn7d\" (UniqueName: \"kubernetes.io/projected/b4e759af-f091-47c0-accc-c68b45b277fa-kube-api-access-scn7d\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.249167 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"b4e759af-f091-47c0-accc-c68b45b277fa\") " pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.297598 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.308187 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.394220 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"]
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.523267 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"]
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.546699 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"]
Feb 16 10:04:30 crc kubenswrapper[4814]: W0216 10:04:30.613182 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd9d10a8_ee0b_41f1_af0b_04b9d23a8754.slice/crio-ff0fb2847426f839e1998ae927c762878cb5682f5c97de8a264741b5c4351036 WatchSource:0}: Error finding container ff0fb2847426f839e1998ae927c762878cb5682f5c97de8a264741b5c4351036: Status 404 returned error can't find the container with id ff0fb2847426f839e1998ae927c762878cb5682f5c97de8a264741b5c4351036
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.783131 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.785658 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.792150 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.792615 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.792876 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.792898 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.794281 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jh4lz"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.794526 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.801210 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.809150 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.856388 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" event={"ID":"f1e95b34-31fc-417c-a131-22b46dd4ede5","Type":"ContainerStarted","Data":"a0e0c9887572af18f874838e9587333649fe20eaa059c0f5832bc3ab979e3789"}
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.858691 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" event={"ID":"d034d4b0-fd39-4862-bfa9-103f3a8da5dc","Type":"ContainerStarted","Data":"9babd8a8e848ec63058fc8aff1b3a24b866b38b25a290e07c1704fc06f6240b7"}
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.861149 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" event={"ID":"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754","Type":"ContainerStarted","Data":"ff0fb2847426f839e1998ae927c762878cb5682f5c97de8a264741b5c4351036"}
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.911944 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912014 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912045 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912071 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vbb\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-kube-api-access-t7vbb\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912105 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912131 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19661670-37f9-4577-93d4-cd87303f3008-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912184 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912260 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912286 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:30 crc kubenswrapper[4814]: I0216 10:04:30.912310 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19661670-37f9-4577-93d4-cd87303f3008-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014310 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014393 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7vbb\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-kube-api-access-t7vbb\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014453 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014479 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19661670-37f9-4577-93d4-cd87303f3008-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014607 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014664 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014742 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014763 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014816 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19661670-37f9-4577-93d4-cd87303f3008-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014907 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.014984 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.015840 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.016144 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.016396 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.016834 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.017402 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19661670-37f9-4577-93d4-cd87303f3008-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.017986 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.022402 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.025107 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19661670-37f9-4577-93d4-cd87303f3008-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.038804 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19661670-37f9-4577-93d4-cd87303f3008-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.041962 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.043185 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7vbb\" (UniqueName: \"kubernetes.io/projected/19661670-37f9-4577-93d4-cd87303f3008-kube-api-access-t7vbb\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.043629 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19661670-37f9-4577-93d4-cd87303f3008\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.130226 4814 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.205903 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.268370 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.272859 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.277735 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-h9slc" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.278668 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.279613 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.279797 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.290755 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.296243 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.308985 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423115 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423573 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423624 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423664 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423685 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423725 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " 
pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423763 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.423786 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8rk\" (UniqueName: \"kubernetes.io/projected/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kube-api-access-gn8rk\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.530338 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532378 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532467 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532583 
4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532604 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532687 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532908 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.532936 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8rk\" (UniqueName: \"kubernetes.io/projected/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kube-api-access-gn8rk\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.535144 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-default\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.535641 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.538823 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.540841 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.541806 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.543446 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.569567 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kolla-config\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.589851 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8rk\" (UniqueName: \"kubernetes.io/projected/43c73c4c-5cdf-4b6d-93b0-afeb459b74c1-kube-api-access-gn8rk\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.589959 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1\") " pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.630626 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.892178 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6a0b4bfb-2144-4fd9-be15-07396c44a11c","Type":"ContainerStarted","Data":"216545dd4fbcf673660ae2dcddd901639bdee4cfa952caf2321355d93c8660a7"} Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.896270 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4e759af-f091-47c0-accc-c68b45b277fa","Type":"ContainerStarted","Data":"135ba11aea4ccd9b67db55821e57d335b9499bc490fa4a29b5f3ccbd1f03bb8b"} Feb 16 10:04:31 crc kubenswrapper[4814]: I0216 10:04:31.949655 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.235892 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: W0216 10:04:32.239354 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c73c4c_5cdf_4b6d_93b0_afeb459b74c1.slice/crio-6e22a93df689778f435cd1825db5552445cd8176d8e3592f1ebc710dcd676a19 WatchSource:0}: Error finding container 6e22a93df689778f435cd1825db5552445cd8176d8e3592f1ebc710dcd676a19: Status 404 returned error can't find the container with id 6e22a93df689778f435cd1825db5552445cd8176d8e3592f1ebc710dcd676a19 Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.581809 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.590141 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.594276 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-8l4fc" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.595214 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.595517 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.595927 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.627381 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800557 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800661 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800731 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800759 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6rv\" (UniqueName: \"kubernetes.io/projected/54151705-0c05-4e03-99d4-9dc9d4a37de7-kube-api-access-kl6rv\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800791 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800812 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.800830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.817379 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.819118 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.825255 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.825579 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.827498 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rp76p" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.828892 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920205 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920262 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: 
I0216 10:04:32.920295 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6cg\" (UniqueName: \"kubernetes.io/projected/5fdd7785-aaf8-4454-b063-9723065293b7-kube-api-access-6k6cg\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920324 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920355 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-kolla-config\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920386 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920426 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920449 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920466 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-config-data\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920497 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920525 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920561 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.920589 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6rv\" (UniqueName: 
\"kubernetes.io/projected/54151705-0c05-4e03-99d4-9dc9d4a37de7-kube-api-access-kl6rv\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.921317 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.923260 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.925772 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.935585 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1","Type":"ContainerStarted","Data":"6e22a93df689778f435cd1825db5552445cd8176d8e3592f1ebc710dcd676a19"} Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.937195 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/54151705-0c05-4e03-99d4-9dc9d4a37de7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: 
\"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.937710 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/54151705-0c05-4e03-99d4-9dc9d4a37de7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.941241 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.942462 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/54151705-0c05-4e03-99d4-9dc9d4a37de7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.944233 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl6rv\" (UniqueName: \"kubernetes.io/projected/54151705-0c05-4e03-99d4-9dc9d4a37de7-kube-api-access-kl6rv\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.944988 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19661670-37f9-4577-93d4-cd87303f3008","Type":"ContainerStarted","Data":"661089692f7db8e7977b43c62dd88ee72c1f6f51e84bcce23177b362ba51f26f"} Feb 16 10:04:32 crc kubenswrapper[4814]: I0216 10:04:32.974886 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"54151705-0c05-4e03-99d4-9dc9d4a37de7\") " pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.039800 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k6cg\" (UniqueName: \"kubernetes.io/projected/5fdd7785-aaf8-4454-b063-9723065293b7-kube-api-access-6k6cg\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.039921 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-kolla-config\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.041223 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.041274 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-config-data\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.041376 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.044825 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-kolla-config\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.053900 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fdd7785-aaf8-4454-b063-9723065293b7-config-data\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.073105 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.111517 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fdd7785-aaf8-4454-b063-9723065293b7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.115400 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k6cg\" (UniqueName: \"kubernetes.io/projected/5fdd7785-aaf8-4454-b063-9723065293b7-kube-api-access-6k6cg\") pod \"memcached-0\" (UID: \"5fdd7785-aaf8-4454-b063-9723065293b7\") " pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.160051 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 10:04:33 crc kubenswrapper[4814]: I0216 10:04:33.246666 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 10:04:34 crc kubenswrapper[4814]: I0216 10:04:34.173897 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 10:04:34 crc kubenswrapper[4814]: W0216 10:04:34.302610 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fdd7785_aaf8_4454_b063_9723065293b7.slice/crio-9706407db8345ad0ab598fecbbda36c28ca0fda5b0dfbd51f6f3d964bc7a71df WatchSource:0}: Error finding container 9706407db8345ad0ab598fecbbda36c28ca0fda5b0dfbd51f6f3d964bc7a71df: Status 404 returned error can't find the container with id 9706407db8345ad0ab598fecbbda36c28ca0fda5b0dfbd51f6f3d964bc7a71df Feb 16 10:04:34 crc kubenswrapper[4814]: I0216 10:04:34.598244 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.034219 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"5fdd7785-aaf8-4454-b063-9723065293b7","Type":"ContainerStarted","Data":"9706407db8345ad0ab598fecbbda36c28ca0fda5b0dfbd51f6f3d964bc7a71df"} Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.034278 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"54151705-0c05-4e03-99d4-9dc9d4a37de7","Type":"ContainerStarted","Data":"784c10ac254e2c7b4c3e2e077e618d13a8fafdffc91182514efc3ece677ec672"} Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.606431 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.607876 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.613562 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-lqrb5" Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.625065 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.722160 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4wf\" (UniqueName: \"kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf\") pod \"kube-state-metrics-0\" (UID: \"55140aa6-2437-463c-be2e-0fa6735ee321\") " pod="openstack/kube-state-metrics-0" Feb 16 10:04:35 crc kubenswrapper[4814]: I0216 10:04:35.824033 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k4wf\" (UniqueName: \"kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf\") pod \"kube-state-metrics-0\" (UID: \"55140aa6-2437-463c-be2e-0fa6735ee321\") " pod="openstack/kube-state-metrics-0" Feb 16 10:04:36 crc kubenswrapper[4814]: I0216 10:04:35.981669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k4wf\" (UniqueName: \"kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf\") pod \"kube-state-metrics-0\" (UID: \"55140aa6-2437-463c-be2e-0fa6735ee321\") " pod="openstack/kube-state-metrics-0" Feb 16 10:04:36 crc kubenswrapper[4814]: I0216 10:04:36.001100 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.157784 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.164785 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.177300 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.177420 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qbpqm" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.177715 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.177858 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.178020 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.178143 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.178285 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.178502 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.188116 4814 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.268787 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.268839 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.268885 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269581 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6vtp\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269634 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269672 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269833 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 
16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.269904 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.346252 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372304 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372389 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372426 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6vtp\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372452 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372588 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372641 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372669 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.372704 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.383930 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.384352 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.385097 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 
10:04:37.388244 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.388283 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/90bf6676d2b1c4d0c7b45da57bbcb46d490752accd713708e5a50469d2e9677d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.394576 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.414313 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.414944 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6vtp\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.418012 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.437895 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.439521 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.468950 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:37 crc kubenswrapper[4814]: I0216 10:04:37.530693 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.145733 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.149014 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.156594 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.156823 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xj77v" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.157931 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.158377 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.158932 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.160490 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.254202 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.254567 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.254741 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xsjfx\" (UniqueName: \"kubernetes.io/projected/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-kube-api-access-xsjfx\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.254816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.254927 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.255000 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.255108 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.255228 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.356908 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.356991 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357036 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357085 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjfx\" (UniqueName: \"kubernetes.io/projected/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-kube-api-access-xsjfx\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357110 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " 
pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357129 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357147 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.357222 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.358212 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-config\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.360293 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.361683 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.361993 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.365194 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.380476 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.389989 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjfx\" (UniqueName: \"kubernetes.io/projected/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-kube-api-access-xsjfx\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.390338 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/687aef9d-288e-47b4-9f5f-1ec1bd5b17f9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.401961 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9\") " pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:38 crc kubenswrapper[4814]: I0216 10:04:38.508944 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.609147 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dc2nv"] Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.613420 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.618746 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.618923 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.620121 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv"] Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.633348 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-v6xwq"] Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.635573 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.636479 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sx5lj" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.699443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v6xwq"] Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710588 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-log\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710646 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wskzm\" (UniqueName: \"kubernetes.io/projected/51879c30-795f-4f27-8018-fdafbafd8a4d-kube-api-access-wskzm\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710678 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-ovn-controller-tls-certs\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710725 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6150f-ee9f-437c-8813-4255d2533e45-scripts\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " 
pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710766 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-log-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710792 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-etc-ovs\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710814 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710849 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-run\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710886 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24lw\" (UniqueName: \"kubernetes.io/projected/7de6150f-ee9f-437c-8813-4255d2533e45-kube-api-access-m24lw\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc 
kubenswrapper[4814]: I0216 10:04:39.710916 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51879c30-795f-4f27-8018-fdafbafd8a4d-scripts\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710934 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-combined-ca-bundle\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710961 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.710982 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-lib\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812473 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-log-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812560 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-etc-ovs\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812588 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812622 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-run\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812665 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m24lw\" (UniqueName: \"kubernetes.io/projected/7de6150f-ee9f-437c-8813-4255d2533e45-kube-api-access-m24lw\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51879c30-795f-4f27-8018-fdafbafd8a4d-scripts\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812720 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-combined-ca-bundle\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812752 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812770 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-lib\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812803 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-log\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812827 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wskzm\" (UniqueName: \"kubernetes.io/projected/51879c30-795f-4f27-8018-fdafbafd8a4d-kube-api-access-wskzm\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812854 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-ovn-controller-tls-certs\") pod \"ovn-controller-dc2nv\" 
(UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.812899 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6150f-ee9f-437c-8813-4255d2533e45-scripts\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.820989 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7de6150f-ee9f-437c-8813-4255d2533e45-scripts\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.821211 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-etc-ovs\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.831035 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-log-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.831131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.846001 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-run\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.846075 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-lib\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.846142 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7de6150f-ee9f-437c-8813-4255d2533e45-var-run-ovn\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.864313 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wskzm\" (UniqueName: \"kubernetes.io/projected/51879c30-795f-4f27-8018-fdafbafd8a4d-kube-api-access-wskzm\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.864917 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m24lw\" (UniqueName: \"kubernetes.io/projected/7de6150f-ee9f-437c-8813-4255d2533e45-kube-api-access-m24lw\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.875161 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/51879c30-795f-4f27-8018-fdafbafd8a4d-var-log\") pod \"ovn-controller-ovs-v6xwq\" (UID: 
\"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.878517 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-ovn-controller-tls-certs\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.901431 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de6150f-ee9f-437c-8813-4255d2533e45-combined-ca-bundle\") pod \"ovn-controller-dc2nv\" (UID: \"7de6150f-ee9f-437c-8813-4255d2533e45\") " pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.903887 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51879c30-795f-4f27-8018-fdafbafd8a4d-scripts\") pod \"ovn-controller-ovs-v6xwq\" (UID: \"51879c30-795f-4f27-8018-fdafbafd8a4d\") " pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.965713 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv" Feb 16 10:04:39 crc kubenswrapper[4814]: I0216 10:04:39.997235 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.226609 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.275632 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.293134 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sg9h6" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.293943 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.294107 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.298488 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.332440 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397395 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46r4h\" (UniqueName: \"kubernetes.io/projected/6eec7640-cb34-4716-90e6-36e4ba140f8f-kube-api-access-46r4h\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397486 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397567 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397646 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397686 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397728 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.397746 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 
crc kubenswrapper[4814]: I0216 10:04:43.499789 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46r4h\" (UniqueName: \"kubernetes.io/projected/6eec7640-cb34-4716-90e6-36e4ba140f8f-kube-api-access-46r4h\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.499869 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.499921 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.500782 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.500879 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.500945 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.501032 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.501058 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.502612 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.503197 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eec7640-cb34-4716-90e6-36e4ba140f8f-config\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.501576 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 
10:04:43.504466 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.519872 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.526891 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.528165 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46r4h\" (UniqueName: \"kubernetes.io/projected/6eec7640-cb34-4716-90e6-36e4ba140f8f-kube-api-access-46r4h\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.537969 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6eec7640-cb34-4716-90e6-36e4ba140f8f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.589182 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6eec7640-cb34-4716-90e6-36e4ba140f8f\") " pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:43 crc kubenswrapper[4814]: I0216 10:04:43.624567 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 10:04:58 crc kubenswrapper[4814]: W0216 10:04:58.317226 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55140aa6_2437_463c_be2e_0fa6735ee321.slice/crio-9651604a7cebb86e78d3be91b02531db2fe24c5754e4b8e0e3f03d647cc5197b WatchSource:0}: Error finding container 9651604a7cebb86e78d3be91b02531db2fe24c5754e4b8e0e3f03d647cc5197b: Status 404 returned error can't find the container with id 9651604a7cebb86e78d3be91b02531db2fe24c5754e4b8e0e3f03d647cc5197b Feb 16 10:04:58 crc kubenswrapper[4814]: I0216 10:04:58.596798 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55140aa6-2437-463c-be2e-0fa6735ee321","Type":"ContainerStarted","Data":"9651604a7cebb86e78d3be91b02531db2fe24c5754e4b8e0e3f03d647cc5197b"} Feb 16 10:04:59 crc kubenswrapper[4814]: I0216 10:04:59.859382 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.266897 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.267902 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.268114 4814 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:setup-container,Image:38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jlmg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Li
venessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod notifications-rabbitmq-server-0_openstack(6a0b4bfb-2144-4fd9-be15-07396c44a11c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.269396 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/notifications-rabbitmq-server-0" podUID="6a0b4bfb-2144-4fd9-be15-07396c44a11c" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.655271 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/notifications-rabbitmq-server-0" podUID="6a0b4bfb-2144-4fd9-be15-07396c44a11c" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.746886 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-memcached:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.746978 4814 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-memcached:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.747239 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.164:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n7dh68fhbhcdh577h5fh657h54h87h68dh87h5d9h58dh59ch5f6h588h5f8h65fh5f4h58ch9ch564h7dhf4h685h5b9h9hfch694h9hd8h67dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.p
em,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k6cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(5fdd7785-aaf8-4454-b063-9723065293b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.748480 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="5fdd7785-aaf8-4454-b063-9723065293b7" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.767950 4814 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.768042 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.768283 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7vbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(19661670-37f9-4577-93d4-cd87303f3008): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:03 crc 
kubenswrapper[4814]: E0216 10:05:03.769495 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="19661670-37f9-4577-93d4-cd87303f3008" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.773991 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.774066 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Feb 16 10:05:03 crc kubenswrapper[4814]: E0216 10:05:03.774290 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scn7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b4e759af-f091-47c0-accc-c68b45b277fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:03 crc 
kubenswrapper[4814]: E0216 10:05:03.775491 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b4e759af-f091-47c0-accc-c68b45b277fa" Feb 16 10:05:04 crc kubenswrapper[4814]: E0216 10:05:04.662845 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="19661670-37f9-4577-93d4-cd87303f3008" Feb 16 10:05:04 crc kubenswrapper[4814]: E0216 10:05:04.670818 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="5fdd7785-aaf8-4454-b063-9723065293b7" Feb 16 10:05:04 crc kubenswrapper[4814]: E0216 10:05:04.683432 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b4e759af-f091-47c0-accc-c68b45b277fa" Feb 16 10:05:05 crc kubenswrapper[4814]: W0216 10:05:05.326078 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687aef9d_288e_47b4_9f5f_1ec1bd5b17f9.slice/crio-90fd9f2566c361df5b802b67d8d7a1d99abdc463961b79cb9b488288077c9615 WatchSource:0}: Error finding container 90fd9f2566c361df5b802b67d8d7a1d99abdc463961b79cb9b488288077c9615: Status 404 returned error can't find the container with id 
90fd9f2566c361df5b802b67d8d7a1d99abdc463961b79cb9b488288077c9615 Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.358573 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.358939 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.359104 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:kube-api-access-kl6rv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(54151705-0c05-4e03-99d4-9dc9d4a37de7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.360338 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="54151705-0c05-4e03-99d4-9dc9d4a37de7" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.384052 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.384127 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.384286 4814 kuberuntime_manager.go:1274] 
"Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn8rk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(43c73c4c-5cdf-4b6d-93b0-afeb459b74c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.385541 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="43c73c4c-5cdf-4b6d-93b0-afeb459b74c1" Feb 16 10:05:05 crc kubenswrapper[4814]: I0216 10:05:05.674039 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9","Type":"ContainerStarted","Data":"90fd9f2566c361df5b802b67d8d7a1d99abdc463961b79cb9b488288077c9615"} Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.677626 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="43c73c4c-5cdf-4b6d-93b0-afeb459b74c1" Feb 16 10:05:05 crc kubenswrapper[4814]: E0216 10:05:05.694323 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="54151705-0c05-4e03-99d4-9dc9d4a37de7" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.925179 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" 
Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.925729 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.925911 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9glv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fals
e,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5bd759bbbf-xj9pk_openstack(906bee2c-78af-408e-9a66-693e9471cfa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.927134 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" podUID="906bee2c-78af-408e-9a66-693e9471cfa3" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.929829 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.929896 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.930162 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57mnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-866784dbf-578xk_openstack(7650f95b-47ff-4fc3-8f0f-55323141d2ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:10 crc kubenswrapper[4814]: E0216 10:05:10.932512 4814 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-866784dbf-578xk" podUID="7650f95b-47ff-4fc3-8f0f-55323141d2ed" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.090786 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.090868 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.091032 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n658h5c5h88h68dhb6h57dhd4h697hb8h8fh74hb7h54fh54dh548h7h55dhb8h9fh55dh688h5bbh5d5h675h669hb7h67hbbhffh668h5c7hc5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8sz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5dcdf6b57c-fplvz_openstack(dd9d10a8-ee0b-41f1-af0b-04b9d23a8754): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.092309 4814 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" podUID="dd9d10a8-ee0b-41f1-af0b-04b9d23a8754" Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.242877 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv"] Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.250509 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:05:11 crc kubenswrapper[4814]: W0216 10:05:11.321746 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9320085e_0598_4822_aa1d_5b2f9469f573.slice/crio-c385e7263af8b21810c55f084f19c91629f2d2592bc42b93cf53e48dbafda933 WatchSource:0}: Error finding container c385e7263af8b21810c55f084f19c91629f2d2592bc42b93cf53e48dbafda933: Status 404 returned error can't find the container with id c385e7263af8b21810c55f084f19c91629f2d2592bc42b93cf53e48dbafda933 Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.424564 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.591107 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v6xwq"] Feb 16 10:05:11 crc kubenswrapper[4814]: W0216 10:05:11.671124 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51879c30_795f_4f27_8018_fdafbafd8a4d.slice/crio-9935e9017a8433d165a7acb00287229ed0d2b0a45d3dfe013e338a04b0094854 WatchSource:0}: Error finding container 9935e9017a8433d165a7acb00287229ed0d2b0a45d3dfe013e338a04b0094854: Status 404 returned error can't find the container with id 9935e9017a8433d165a7acb00287229ed0d2b0a45d3dfe013e338a04b0094854 Feb 16 10:05:11 crc kubenswrapper[4814]: 
W0216 10:05:11.672968 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eec7640_cb34_4716_90e6_36e4ba140f8f.slice/crio-7e26fae1c0ff163136164077af6081c5d138a0882a93f5027ce5f5a59ff17c94 WatchSource:0}: Error finding container 7e26fae1c0ff163136164077af6081c5d138a0882a93f5027ce5f5a59ff17c94: Status 404 returned error can't find the container with id 7e26fae1c0ff163136164077af6081c5d138a0882a93f5027ce5f5a59ff17c94 Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.725779 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.725864 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.726102 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h65fh56h6fh87h85h57h76h5b7h94hffh649hfbh8ch5bch56fh5c5hbh86hf9h99h5dch95h66hd5h555h566h646h546h79h9dh55dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnvgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-79ddd488bf-6cmnj_openstack(d034d4b0-fd39-4862-bfa9-103f3a8da5dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:11 crc kubenswrapper[4814]: E0216 10:05:11.727624 4814 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" podUID="d034d4b0-fd39-4862-bfa9-103f3a8da5dc" Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.736684 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerStarted","Data":"c385e7263af8b21810c55f084f19c91629f2d2592bc42b93cf53e48dbafda933"} Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.740303 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v6xwq" event={"ID":"51879c30-795f-4f27-8018-fdafbafd8a4d","Type":"ContainerStarted","Data":"9935e9017a8433d165a7acb00287229ed0d2b0a45d3dfe013e338a04b0094854"} Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.741967 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6eec7640-cb34-4716-90e6-36e4ba140f8f","Type":"ContainerStarted","Data":"7e26fae1c0ff163136164077af6081c5d138a0882a93f5027ce5f5a59ff17c94"} Feb 16 10:05:11 crc kubenswrapper[4814]: I0216 10:05:11.743855 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv" event={"ID":"7de6150f-ee9f-437c-8813-4255d2533e45","Type":"ContainerStarted","Data":"b70e2c61d22694ac3d3a2df895134e4a133ad30cea5a878f90fcd6f92070d6fa"} Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.129963 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.130032 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.130428 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77hb9hddhdfhf5h5cch698h578h5f8h675h5c5hdch97h5bch59bh5b6h55h5bch556hb5h599h8dhc8h667h59ch659h578hcfh5c7h9dh645h554q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v27b8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,Windo
wsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fc5599df7-j66xj_openstack(f1e95b34-31fc-417c-a131-22b46dd4ede5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.131582 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" podUID="f1e95b34-31fc-417c-a131-22b46dd4ede5" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.250666 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.262091 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.325651 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc\") pod \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326388 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config\") pod \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326419 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config\") pod \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326476 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8sz4\" (UniqueName: \"kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4\") pod \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\" (UID: \"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326511 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mnq\" (UniqueName: \"kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq\") pod \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326598 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc\") pod \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\" (UID: \"7650f95b-47ff-4fc3-8f0f-55323141d2ed\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.326641 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754" (UID: "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.327113 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config" (OuterVolumeSpecName: "config") pod "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754" (UID: "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.327494 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config" (OuterVolumeSpecName: "config") pod "7650f95b-47ff-4fc3-8f0f-55323141d2ed" (UID: "7650f95b-47ff-4fc3-8f0f-55323141d2ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.327823 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7650f95b-47ff-4fc3-8f0f-55323141d2ed" (UID: "7650f95b-47ff-4fc3-8f0f-55323141d2ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.333008 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.333044 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.333053 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.333061 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7650f95b-47ff-4fc3-8f0f-55323141d2ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.354816 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq" (OuterVolumeSpecName: "kube-api-access-57mnq") pod "7650f95b-47ff-4fc3-8f0f-55323141d2ed" (UID: "7650f95b-47ff-4fc3-8f0f-55323141d2ed"). InnerVolumeSpecName "kube-api-access-57mnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.357775 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4" (OuterVolumeSpecName: "kube-api-access-t8sz4") pod "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754" (UID: "dd9d10a8-ee0b-41f1-af0b-04b9d23a8754"). InnerVolumeSpecName "kube-api-access-t8sz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.439182 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8sz4\" (UniqueName: \"kubernetes.io/projected/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754-kube-api-access-t8sz4\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.439248 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mnq\" (UniqueName: \"kubernetes.io/projected/7650f95b-47ff-4fc3-8f0f-55323141d2ed-kube-api-access-57mnq\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.750319 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.758693 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" event={"ID":"dd9d10a8-ee0b-41f1-af0b-04b9d23a8754","Type":"ContainerDied","Data":"ff0fb2847426f839e1998ae927c762878cb5682f5c97de8a264741b5c4351036"} Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.758715 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dcdf6b57c-fplvz" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.760966 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866784dbf-578xk" event={"ID":"7650f95b-47ff-4fc3-8f0f-55323141d2ed","Type":"ContainerDied","Data":"851176f30cc6d39ba724effbc011674b9df205be8209818ba9f0d93133a7b325"} Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.761047 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-866784dbf-578xk" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.764371 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" event={"ID":"906bee2c-78af-408e-9a66-693e9471cfa3","Type":"ContainerDied","Data":"28caed88c1cf0a6f8987e1c9b5eb9bb08e68e8ccd42fda070a728a8a9a7e85b5"} Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.764407 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bd759bbbf-xj9pk" Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.770163 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" podUID="f1e95b34-31fc-417c-a131-22b46dd4ede5" Feb 16 10:05:12 crc kubenswrapper[4814]: E0216 10:05:12.770414 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" podUID="d034d4b0-fd39-4862-bfa9-103f3a8da5dc" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.845495 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config\") pod \"906bee2c-78af-408e-9a66-693e9471cfa3\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.845670 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9glv\" (UniqueName: \"kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv\") pod 
\"906bee2c-78af-408e-9a66-693e9471cfa3\" (UID: \"906bee2c-78af-408e-9a66-693e9471cfa3\") " Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.850965 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config" (OuterVolumeSpecName: "config") pod "906bee2c-78af-408e-9a66-693e9471cfa3" (UID: "906bee2c-78af-408e-9a66-693e9471cfa3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.851956 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv" (OuterVolumeSpecName: "kube-api-access-c9glv") pod "906bee2c-78af-408e-9a66-693e9471cfa3" (UID: "906bee2c-78af-408e-9a66-693e9471cfa3"). InnerVolumeSpecName "kube-api-access-c9glv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.914917 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"] Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.922418 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-866784dbf-578xk"] Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.937599 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"] Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.945054 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dcdf6b57c-fplvz"] Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.952503 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/906bee2c-78af-408e-9a66-693e9471cfa3-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:12 crc kubenswrapper[4814]: I0216 10:05:12.952567 4814 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-c9glv\" (UniqueName: \"kubernetes.io/projected/906bee2c-78af-408e-9a66-693e9471cfa3-kube-api-access-c9glv\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:13 crc kubenswrapper[4814]: I0216 10:05:13.008811 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7650f95b-47ff-4fc3-8f0f-55323141d2ed" path="/var/lib/kubelet/pods/7650f95b-47ff-4fc3-8f0f-55323141d2ed/volumes" Feb 16 10:05:13 crc kubenswrapper[4814]: I0216 10:05:13.009206 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9d10a8-ee0b-41f1-af0b-04b9d23a8754" path="/var/lib/kubelet/pods/dd9d10a8-ee0b-41f1-af0b-04b9d23a8754/volumes" Feb 16 10:05:13 crc kubenswrapper[4814]: I0216 10:05:13.137978 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"] Feb 16 10:05:13 crc kubenswrapper[4814]: I0216 10:05:13.149429 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bd759bbbf-xj9pk"] Feb 16 10:05:15 crc kubenswrapper[4814]: I0216 10:05:15.006507 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906bee2c-78af-408e-9a66-693e9471cfa3" path="/var/lib/kubelet/pods/906bee2c-78af-408e-9a66-693e9471cfa3/volumes" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.833072 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv" event={"ID":"7de6150f-ee9f-437c-8813-4255d2533e45","Type":"ContainerStarted","Data":"133ad2b303e70d9dcaa224305cc1debdb2785c718601d025c7c3e28e5ed26d19"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.833662 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dc2nv" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.837235 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"5fdd7785-aaf8-4454-b063-9723065293b7","Type":"ContainerStarted","Data":"7417167cdd7d5c29e2880ca4f7cd6c9124b832c81960a442fba4614766f8b286"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.837526 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.841366 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9","Type":"ContainerStarted","Data":"2fcd3b1cdfdd9bf02547588ce33779889e5ee287cfaa6c215f33085cb08a8b44"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.850368 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v6xwq" event={"ID":"51879c30-795f-4f27-8018-fdafbafd8a4d","Type":"ContainerStarted","Data":"319dc3a48c602f86fc241547c3d812303d9b61ba784d3fd6ceccfb938dde25ee"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.854268 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55140aa6-2437-463c-be2e-0fa6735ee321","Type":"ContainerStarted","Data":"fd8e5ec3b18bccf213bc27a1e20c585795b6685cfc381f7b958ae9d75e245297"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.854388 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.860862 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1","Type":"ContainerStarted","Data":"1ac9641c2410a485d279b28baf28d50eef87460ad8a52298912e3756783e5d47"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.864719 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"54151705-0c05-4e03-99d4-9dc9d4a37de7","Type":"ContainerStarted","Data":"71be41e254df963886500f449476052102c05e7551bddd6e73231e4b3e990164"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.869934 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6eec7640-cb34-4716-90e6-36e4ba140f8f","Type":"ContainerStarted","Data":"452089f09dcc72d0b932e2fddcf985a3f81c2d1848183d038c29c97c6075f47d"} Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.870083 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dc2nv" podStartSLOduration=33.612081964 podStartE2EDuration="39.870054575s" podCreationTimestamp="2026-02-16 10:04:39 +0000 UTC" firstStartedPulling="2026-02-16 10:05:11.321732163 +0000 UTC m=+1169.014888343" lastFinishedPulling="2026-02-16 10:05:17.579704774 +0000 UTC m=+1175.272860954" observedRunningTime="2026-02-16 10:05:18.855267656 +0000 UTC m=+1176.548423846" watchObservedRunningTime="2026-02-16 10:05:18.870054575 +0000 UTC m=+1176.563210755" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.908631 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=24.624180664 podStartE2EDuration="43.90860552s" podCreationTimestamp="2026-02-16 10:04:35 +0000 UTC" firstStartedPulling="2026-02-16 10:04:58.326002637 +0000 UTC m=+1156.019158817" lastFinishedPulling="2026-02-16 10:05:17.610427493 +0000 UTC m=+1175.303583673" observedRunningTime="2026-02-16 10:05:18.90679931 +0000 UTC m=+1176.599955510" watchObservedRunningTime="2026-02-16 10:05:18.90860552 +0000 UTC m=+1176.601761710" Feb 16 10:05:18 crc kubenswrapper[4814]: I0216 10:05:18.933392 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.682704428 podStartE2EDuration="46.933364645s" podCreationTimestamp="2026-02-16 10:04:32 +0000 UTC" 
firstStartedPulling="2026-02-16 10:04:34.331446992 +0000 UTC m=+1132.024603182" lastFinishedPulling="2026-02-16 10:05:17.582107219 +0000 UTC m=+1175.275263399" observedRunningTime="2026-02-16 10:05:18.932103759 +0000 UTC m=+1176.625259939" watchObservedRunningTime="2026-02-16 10:05:18.933364645 +0000 UTC m=+1176.626520825" Feb 16 10:05:19 crc kubenswrapper[4814]: I0216 10:05:19.881738 4814 generic.go:334] "Generic (PLEG): container finished" podID="51879c30-795f-4f27-8018-fdafbafd8a4d" containerID="319dc3a48c602f86fc241547c3d812303d9b61ba784d3fd6ceccfb938dde25ee" exitCode=0 Feb 16 10:05:19 crc kubenswrapper[4814]: I0216 10:05:19.881828 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v6xwq" event={"ID":"51879c30-795f-4f27-8018-fdafbafd8a4d","Type":"ContainerDied","Data":"319dc3a48c602f86fc241547c3d812303d9b61ba784d3fd6ceccfb938dde25ee"} Feb 16 10:05:20 crc kubenswrapper[4814]: I0216 10:05:20.894718 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4e759af-f091-47c0-accc-c68b45b277fa","Type":"ContainerStarted","Data":"1846349bf7b7f4f56f152afa89867acf1d900891cbd2de6857821f736a54caf8"} Feb 16 10:05:20 crc kubenswrapper[4814]: I0216 10:05:20.896857 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6a0b4bfb-2144-4fd9-be15-07396c44a11c","Type":"ContainerStarted","Data":"081312a7b74d0b18ddb6ccf7b84fb9a8efe3dbf13e3158633b3976c9be4a3e20"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.924156 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19661670-37f9-4577-93d4-cd87303f3008","Type":"ContainerStarted","Data":"366bd9fdc38de7d4c51396d0d63626f375267d4b7c2ed227d24d4be7f09654d0"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.933055 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerStarted","Data":"077c41c360689d8f2e76ccda73a35ea7fde697cbec2d1cd364fa21bf2abe4717"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.938006 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"687aef9d-288e-47b4-9f5f-1ec1bd5b17f9","Type":"ContainerStarted","Data":"4edda719d21ad90134946769932c3613d3c2b53d14c5071df7f2db3c3df118a3"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.941357 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v6xwq" event={"ID":"51879c30-795f-4f27-8018-fdafbafd8a4d","Type":"ContainerStarted","Data":"1594d4efee06282950e280078b5c378d71448a1a510be3760f12f96c757dacd1"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.943658 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6eec7640-cb34-4716-90e6-36e4ba140f8f","Type":"ContainerStarted","Data":"c64be46b9e8560efdd62433cd34bd5e26bc47f25e8276693aa038205e65f1794"} Feb 16 10:05:21 crc kubenswrapper[4814]: I0216 10:05:21.995312 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.952313827 podStartE2EDuration="44.995280427s" podCreationTimestamp="2026-02-16 10:04:37 +0000 UTC" firstStartedPulling="2026-02-16 10:05:05.331019616 +0000 UTC m=+1163.024175796" lastFinishedPulling="2026-02-16 10:05:21.373986216 +0000 UTC m=+1179.067142396" observedRunningTime="2026-02-16 10:05:21.986512045 +0000 UTC m=+1179.679668235" watchObservedRunningTime="2026-02-16 10:05:21.995280427 +0000 UTC m=+1179.688436607" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.053041 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.364183693 podStartE2EDuration="40.053011752s" podCreationTimestamp="2026-02-16 10:04:42 +0000 UTC" 
firstStartedPulling="2026-02-16 10:05:11.677242658 +0000 UTC m=+1169.370398838" lastFinishedPulling="2026-02-16 10:05:21.366070717 +0000 UTC m=+1179.059226897" observedRunningTime="2026-02-16 10:05:22.052150228 +0000 UTC m=+1179.745306408" watchObservedRunningTime="2026-02-16 10:05:22.053011752 +0000 UTC m=+1179.746167932" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.625510 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.670910 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.959212 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v6xwq" event={"ID":"51879c30-795f-4f27-8018-fdafbafd8a4d","Type":"ContainerStarted","Data":"2df98b1e0943a44d9143cfb56fbc51eb4d222c5678223ff111841668ff493d6a"} Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.960886 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.960921 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.960946 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:05:22 crc kubenswrapper[4814]: I0216 10:05:22.991039 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-v6xwq" podStartSLOduration=38.230743669 podStartE2EDuration="43.991014205s" podCreationTimestamp="2026-02-16 10:04:39 +0000 UTC" firstStartedPulling="2026-02-16 10:05:11.675115349 +0000 UTC m=+1169.368271529" lastFinishedPulling="2026-02-16 10:05:17.435385885 +0000 UTC m=+1175.128542065" observedRunningTime="2026-02-16 10:05:22.988052104 
+0000 UTC m=+1180.681208284" watchObservedRunningTime="2026-02-16 10:05:22.991014205 +0000 UTC m=+1180.684170375" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.162955 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.509560 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.509634 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.739822 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.741849 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 10:05:23 crc kubenswrapper[4814]: I0216 10:05:23.977870 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.024621 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.057338 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.059059 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.063379 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-w6zt6"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.069405 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.085315 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.091556 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.095805 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.149328 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w6zt6"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.210897 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3c611b-9142-4702-a356-b22606f5b935-config\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.210971 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovn-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211031 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkbm\" (UniqueName: \"kubernetes.io/projected/ce3c611b-9142-4702-a356-b22606f5b935-kube-api-access-kdkbm\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211085 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211111 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211138 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8njjk\" (UniqueName: \"kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211173 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovs-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 
10:05:24.211201 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211237 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.211303 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-combined-ca-bundle\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.312946 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-combined-ca-bundle\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313039 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3c611b-9142-4702-a356-b22606f5b935-config\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 
10:05:24.313062 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovn-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313108 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkbm\" (UniqueName: \"kubernetes.io/projected/ce3c611b-9142-4702-a356-b22606f5b935-kube-api-access-kdkbm\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313157 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313244 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8njjk\" (UniqueName: \"kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313273 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovs-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313296 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.313339 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.314455 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.315816 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovn-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.315929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/ce3c611b-9142-4702-a356-b22606f5b935-ovs-rundir\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.316922 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce3c611b-9142-4702-a356-b22606f5b935-config\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.316953 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.317033 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.325783 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.326898 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce3c611b-9142-4702-a356-b22606f5b935-combined-ca-bundle\") pod 
\"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.338757 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8njjk\" (UniqueName: \"kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk\") pod \"dnsmasq-dns-6c88cb84bc-8hmnh\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.339275 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkbm\" (UniqueName: \"kubernetes.io/projected/ce3c611b-9142-4702-a356-b22606f5b935-kube-api-access-kdkbm\") pod \"ovn-controller-metrics-w6zt6\" (UID: \"ce3c611b-9142-4702-a356-b22606f5b935\") " pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.421683 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.431973 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-w6zt6" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.586815 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.632232 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.634631 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.642108 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.669170 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.721676 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.723728 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.734837 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.735378 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.740589 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d25z2" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.742380 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.745057 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.746999 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747125 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747214 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-scripts\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747319 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-config\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747434 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jffgb\" (UniqueName: \"kubernetes.io/projected/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-kube-api-access-jffgb\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747511 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") pod 
\"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747723 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.747816 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.748053 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.748132 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9sp\" (UniqueName: \"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp\") pod 
\"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.748426 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.819071 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.853260 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config\") pod \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.853374 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnvgp\" (UniqueName: \"kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp\") pod \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854204 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc\") pod \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\" (UID: \"d034d4b0-fd39-4862-bfa9-103f3a8da5dc\") " Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854594 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854660 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-scripts\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854807 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-config\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854797 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config" (OuterVolumeSpecName: "config") pod "d034d4b0-fd39-4862-bfa9-103f3a8da5dc" (UID: "d034d4b0-fd39-4862-bfa9-103f3a8da5dc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854834 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jffgb\" (UniqueName: \"kubernetes.io/projected/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-kube-api-access-jffgb\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.854995 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.855212 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.855296 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 
10:05:24.855702 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d034d4b0-fd39-4862-bfa9-103f3a8da5dc" (UID: "d034d4b0-fd39-4862-bfa9-103f3a8da5dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.855756 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.855783 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9sp\" (UniqueName: \"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.855853 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.856047 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.856060 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-config\") on node \"crc\" DevicePath \"\"" Feb 
16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.857416 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.858326 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.858698 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-scripts\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.859125 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-config\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.861257 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.861258 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.864290 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.867434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp" (OuterVolumeSpecName: "kube-api-access-bnvgp") pod "d034d4b0-fd39-4862-bfa9-103f3a8da5dc" (UID: "d034d4b0-fd39-4862-bfa9-103f3a8da5dc"). InnerVolumeSpecName "kube-api-access-bnvgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.867597 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.867849 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.868234 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.878757 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jffgb\" (UniqueName: \"kubernetes.io/projected/3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31-kube-api-access-jffgb\") pod \"ovn-northd-0\" (UID: \"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31\") " pod="openstack/ovn-northd-0" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.902724 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9sp\" (UniqueName: \"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp\") pod \"dnsmasq-dns-554cc8c86f-t5x22\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.957696 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnvgp\" (UniqueName: \"kubernetes.io/projected/d034d4b0-fd39-4862-bfa9-103f3a8da5dc-kube-api-access-bnvgp\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.984635 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" Feb 16 10:05:24 crc kubenswrapper[4814]: I0216 10:05:24.989362 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79ddd488bf-6cmnj" event={"ID":"d034d4b0-fd39-4862-bfa9-103f3a8da5dc","Type":"ContainerDied","Data":"9babd8a8e848ec63058fc8aff1b3a24b866b38b25a290e07c1704fc06f6240b7"} Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.001453 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.100370 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.101236 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"] Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.111267 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79ddd488bf-6cmnj"] Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.241761 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.373646 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v27b8\" (UniqueName: \"kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8\") pod \"f1e95b34-31fc-417c-a131-22b46dd4ede5\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.373776 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config\") pod \"f1e95b34-31fc-417c-a131-22b46dd4ede5\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.373871 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc\") pod \"f1e95b34-31fc-417c-a131-22b46dd4ede5\" (UID: \"f1e95b34-31fc-417c-a131-22b46dd4ede5\") " Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.374976 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1e95b34-31fc-417c-a131-22b46dd4ede5" (UID: "f1e95b34-31fc-417c-a131-22b46dd4ede5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.376086 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config" (OuterVolumeSpecName: "config") pod "f1e95b34-31fc-417c-a131-22b46dd4ede5" (UID: "f1e95b34-31fc-417c-a131-22b46dd4ede5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.399961 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8" (OuterVolumeSpecName: "kube-api-access-v27b8") pod "f1e95b34-31fc-417c-a131-22b46dd4ede5" (UID: "f1e95b34-31fc-417c-a131-22b46dd4ede5"). InnerVolumeSpecName "kube-api-access-v27b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.421700 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.477479 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v27b8\" (UniqueName: \"kubernetes.io/projected/f1e95b34-31fc-417c-a131-22b46dd4ede5-kube-api-access-v27b8\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.477521 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.477553 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1e95b34-31fc-417c-a131-22b46dd4ede5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.524666 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-w6zt6"] Feb 16 10:05:25 crc kubenswrapper[4814]: W0216 10:05:25.532453 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce3c611b_9142_4702_a356_b22606f5b935.slice/crio-dbdce38f91a51ce9531aed105be28a589feeee3197d0c7983642ace50e19dc36 WatchSource:0}: Error finding container dbdce38f91a51ce9531aed105be28a589feeee3197d0c7983642ace50e19dc36: Status 404 returned error can't find the container with id dbdce38f91a51ce9531aed105be28a589feeee3197d0c7983642ace50e19dc36 Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.725039 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:25 crc kubenswrapper[4814]: I0216 10:05:25.938985 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.069648 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w6zt6" event={"ID":"ce3c611b-9142-4702-a356-b22606f5b935","Type":"ContainerStarted","Data":"dbdce38f91a51ce9531aed105be28a589feeee3197d0c7983642ace50e19dc36"} Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.104809 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" event={"ID":"50747a7c-9e5f-4840-8878-62119b6ff4a6","Type":"ContainerStarted","Data":"6d196518ec23195c96a9a5afba3797bf3b2b9469eb07ef1f4ccba0f13411b366"} Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.135751 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" event={"ID":"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8","Type":"ContainerStarted","Data":"610aa7602066b615458f74b0b64648a98b6f18a413c7514bbe34583903a748a8"} Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.155055 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.167635 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" event={"ID":"f1e95b34-31fc-417c-a131-22b46dd4ede5","Type":"ContainerDied","Data":"a0e0c9887572af18f874838e9587333649fe20eaa059c0f5832bc3ab979e3789"} Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.167801 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fc5599df7-j66xj" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.184137 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.194393 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31","Type":"ContainerStarted","Data":"f30eb5a0823ada2d3f09508998db4f5dca350f6a8171a896266d29e163bdd481"} Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.210653 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.212797 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.238146 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.299158 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.299318 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcmw\" (UniqueName: \"kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.299388 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.299413 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.299523 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.371134 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.399273 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fc5599df7-j66xj"] Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.402133 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.402190 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.402309 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.402336 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.402387 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcmw\" (UniqueName: \"kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.403875 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.409625 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc\") pod 
\"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.409768 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.433387 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.444002 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcmw\" (UniqueName: \"kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw\") pod \"dnsmasq-dns-9cd786565-5w9lt\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:26 crc kubenswrapper[4814]: I0216 10:05:26.571343 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.007867 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d034d4b0-fd39-4862-bfa9-103f3a8da5dc" path="/var/lib/kubelet/pods/d034d4b0-fd39-4862-bfa9-103f3a8da5dc/volumes" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.008933 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e95b34-31fc-417c-a131-22b46dd4ede5" path="/var/lib/kubelet/pods/f1e95b34-31fc-417c-a131-22b46dd4ede5/volumes" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.129326 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"] Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.204456 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-w6zt6" event={"ID":"ce3c611b-9142-4702-a356-b22606f5b935","Type":"ContainerStarted","Data":"ec4ab28d57cdba75f3babfeb9db49e2bf747a54790593ee47103d3fbc5cc1e2f"} Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.209171 4814 generic.go:334] "Generic (PLEG): container finished" podID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerID="e2d49015fd2798910ed78eb3721eb7ed25e456f23f62adbb4b186bdee092c14e" exitCode=0 Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.209209 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" event={"ID":"50747a7c-9e5f-4840-8878-62119b6ff4a6","Type":"ContainerDied","Data":"e2d49015fd2798910ed78eb3721eb7ed25e456f23f62adbb4b186bdee092c14e"} Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.215973 4814 generic.go:334] "Generic (PLEG): container finished" podID="1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" containerID="bad64ba221e682e5a67ea474a508466a10039b5b420ee1ed9a1f5ef9a154cbf6" exitCode=0 Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.216045 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" event={"ID":"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8","Type":"ContainerDied","Data":"bad64ba221e682e5a67ea474a508466a10039b5b420ee1ed9a1f5ef9a154cbf6"} Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.251919 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-w6zt6" podStartSLOduration=3.251887253 podStartE2EDuration="3.251887253s" podCreationTimestamp="2026-02-16 10:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:27.228445186 +0000 UTC m=+1184.921601366" watchObservedRunningTime="2026-02-16 10:05:27.251887253 +0000 UTC m=+1184.945043433" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.314810 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.343546 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.343757 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.346934 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.347003 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.347331 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.347335 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-s5n5k" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.536626 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcwh\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-kube-api-access-vzcwh\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.536818 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-lock\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.536850 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.537048 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.537177 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.537219 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-cache\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.640749 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.641257 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-cache\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.641344 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcwh\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-kube-api-access-vzcwh\") pod \"swift-storage-0\" (UID: 
\"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.641431 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.641455 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-lock\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.641553 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: E0216 10:05:27.641003 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:27 crc kubenswrapper[4814]: E0216 10:05:27.641919 4814 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:27 crc kubenswrapper[4814]: E0216 10:05:27.641989 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:28.141960883 +0000 UTC m=+1185.835117063 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.642423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-cache\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.643153 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-lock\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.643649 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.650953 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.668250 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcwh\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-kube-api-access-vzcwh\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " 
pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.692802 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.781284 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.946870 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb\") pod \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.946987 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config\") pod \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.947032 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8njjk\" (UniqueName: \"kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk\") pod \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 10:05:27.947178 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc\") pod \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\" (UID: \"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8\") " Feb 16 10:05:27 crc kubenswrapper[4814]: I0216 
10:05:27.981269 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk" (OuterVolumeSpecName: "kube-api-access-8njjk") pod "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" (UID: "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8"). InnerVolumeSpecName "kube-api-access-8njjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.008524 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-68zpk"] Feb 16 10:05:28 crc kubenswrapper[4814]: E0216 10:05:28.009082 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" containerName="init" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.009098 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" containerName="init" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.009325 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" containerName="init" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.010176 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.015969 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.016149 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.016278 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.019267 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" (UID: "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.026605 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-68zpk"] Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.042273 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config" (OuterVolumeSpecName: "config") pod "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" (UID: "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.057300 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.057680 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8njjk\" (UniqueName: \"kubernetes.io/projected/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-kube-api-access-8njjk\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.058081 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.068618 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" (UID: "1f08ad0e-1d46-4908-abff-dc63c5e3c6a8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159690 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159745 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159792 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159861 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159888 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " 
pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159951 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.159997 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdl56\" (UniqueName: \"kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.160032 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.160093 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:28 crc kubenswrapper[4814]: E0216 10:05:28.160239 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:28 crc kubenswrapper[4814]: E0216 10:05:28.160254 4814 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:28 crc kubenswrapper[4814]: E0216 10:05:28.160302 4814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:29.160285738 +0000 UTC m=+1186.853442118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.229178 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.229185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c88cb84bc-8hmnh" event={"ID":"1f08ad0e-1d46-4908-abff-dc63c5e3c6a8","Type":"ContainerDied","Data":"610aa7602066b615458f74b0b64648a98b6f18a413c7514bbe34583903a748a8"} Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.229272 4814 scope.go:117] "RemoveContainer" containerID="bad64ba221e682e5a67ea474a508466a10039b5b420ee1ed9a1f5ef9a154cbf6" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.232012 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31","Type":"ContainerStarted","Data":"21d43247039e98dcec17ba575ee562e5d735c4c90a6e36bf2a65a91c5dd22524"} Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.245736 4814 generic.go:334] "Generic (PLEG): container finished" podID="22837145-ddd2-4606-bc52-d633720bdeb2" containerID="660f63563650a18fc7c3da50f29f077a141a28e2475b13c81a202bd978eae756" exitCode=0 Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.245846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" 
event={"ID":"22837145-ddd2-4606-bc52-d633720bdeb2","Type":"ContainerDied","Data":"660f63563650a18fc7c3da50f29f077a141a28e2475b13c81a202bd978eae756"} Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.245881 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" event={"ID":"22837145-ddd2-4606-bc52-d633720bdeb2","Type":"ContainerStarted","Data":"26d79121d61b7133ab8c146fe5fe9f0decd447a4ac39c796e28f970730cc3fde"} Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.252763 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" event={"ID":"50747a7c-9e5f-4840-8878-62119b6ff4a6","Type":"ContainerStarted","Data":"25c1401ce4630f422939a508ec8558a998642e3f8ba6a341a66e79b041090ac0"} Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.253602 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.261994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdl56\" (UniqueName: \"kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262063 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262144 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle\") pod 
\"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262183 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262226 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262251 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.262333 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.263204 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " 
pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.263567 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.266461 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.267081 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.267619 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.277993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.295390 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdl56\" (UniqueName: \"kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56\") pod \"swift-ring-rebalance-68zpk\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.317846 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" podStartSLOduration=3.985877689 podStartE2EDuration="4.317807402s" podCreationTimestamp="2026-02-16 10:05:24 +0000 UTC" firstStartedPulling="2026-02-16 10:05:25.740263247 +0000 UTC m=+1183.433419427" lastFinishedPulling="2026-02-16 10:05:26.07219296 +0000 UTC m=+1183.765349140" observedRunningTime="2026-02-16 10:05:28.296570435 +0000 UTC m=+1185.989726635" watchObservedRunningTime="2026-02-16 10:05:28.317807402 +0000 UTC m=+1186.010963592" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.353156 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.362069 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.369988 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c88cb84bc-8hmnh"] Feb 16 10:05:28 crc kubenswrapper[4814]: I0216 10:05:28.911552 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-68zpk"] Feb 16 10:05:28 crc kubenswrapper[4814]: W0216 10:05:28.917426 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf89153bb_4a9e_419a_b142_b339a0797d78.slice/crio-f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2 WatchSource:0}: Error finding container f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2: Status 404 returned error can't find the container with id f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2 Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.014072 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f08ad0e-1d46-4908-abff-dc63c5e3c6a8" path="/var/lib/kubelet/pods/1f08ad0e-1d46-4908-abff-dc63c5e3c6a8/volumes" Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.192513 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:29 crc kubenswrapper[4814]: E0216 10:05:29.192719 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:29 crc kubenswrapper[4814]: E0216 10:05:29.192754 4814 projected.go:194] Error preparing data for projected 
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:29 crc kubenswrapper[4814]: E0216 10:05:29.192827 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:31.192800784 +0000 UTC m=+1188.885956954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.282652 4814 generic.go:334] "Generic (PLEG): container finished" podID="9320085e-0598-4822-aa1d-5b2f9469f573" containerID="077c41c360689d8f2e76ccda73a35ea7fde697cbec2d1cd364fa21bf2abe4717" exitCode=0 Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.282761 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerDied","Data":"077c41c360689d8f2e76ccda73a35ea7fde697cbec2d1cd364fa21bf2abe4717"} Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.322978 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" event={"ID":"22837145-ddd2-4606-bc52-d633720bdeb2","Type":"ContainerStarted","Data":"867ba8ec1f5475ac03128d2fb1f5aa259505ad3f36d2e25c88b7f2aca22642ab"} Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.323143 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.331939 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-68zpk" 
event={"ID":"f89153bb-4a9e-419a-b142-b339a0797d78","Type":"ContainerStarted","Data":"f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2"} Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.337527 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31","Type":"ContainerStarted","Data":"2dfca8ac0d9cd8638536c28f31581b3f7493f77703d9373b14ec12111639f64c"} Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.337602 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.385504 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.5217913530000002 podStartE2EDuration="5.385478269s" podCreationTimestamp="2026-02-16 10:05:24 +0000 UTC" firstStartedPulling="2026-02-16 10:05:25.979797247 +0000 UTC m=+1183.672953427" lastFinishedPulling="2026-02-16 10:05:27.843484163 +0000 UTC m=+1185.536640343" observedRunningTime="2026-02-16 10:05:29.380109221 +0000 UTC m=+1187.073265391" watchObservedRunningTime="2026-02-16 10:05:29.385478269 +0000 UTC m=+1187.078634449" Feb 16 10:05:29 crc kubenswrapper[4814]: I0216 10:05:29.412332 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" podStartSLOduration=3.4123086799999998 podStartE2EDuration="3.41230868s" podCreationTimestamp="2026-02-16 10:05:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:29.406325455 +0000 UTC m=+1187.099481655" watchObservedRunningTime="2026-02-16 10:05:29.41230868 +0000 UTC m=+1187.105464860" Feb 16 10:05:31 crc kubenswrapper[4814]: I0216 10:05:31.235741 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:31 crc kubenswrapper[4814]: E0216 10:05:31.236045 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:31 crc kubenswrapper[4814]: E0216 10:05:31.236099 4814 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:31 crc kubenswrapper[4814]: E0216 10:05:31.236189 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:35.236159836 +0000 UTC m=+1192.929316016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:33 crc kubenswrapper[4814]: I0216 10:05:33.374012 4814 generic.go:334] "Generic (PLEG): container finished" podID="43c73c4c-5cdf-4b6d-93b0-afeb459b74c1" containerID="1ac9641c2410a485d279b28baf28d50eef87460ad8a52298912e3756783e5d47" exitCode=0 Feb 16 10:05:33 crc kubenswrapper[4814]: I0216 10:05:33.374160 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1","Type":"ContainerDied","Data":"1ac9641c2410a485d279b28baf28d50eef87460ad8a52298912e3756783e5d47"} Feb 16 10:05:33 crc kubenswrapper[4814]: I0216 10:05:33.376714 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-68zpk" 
event={"ID":"f89153bb-4a9e-419a-b142-b339a0797d78","Type":"ContainerStarted","Data":"d1618d236bbaed557c8c90c2802b65e22179b27c7b67f660c3b0ee2e3f1ce1a1"} Feb 16 10:05:33 crc kubenswrapper[4814]: I0216 10:05:33.441864 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-68zpk" podStartSLOduration=2.7649321049999998 podStartE2EDuration="6.441841424s" podCreationTimestamp="2026-02-16 10:05:27 +0000 UTC" firstStartedPulling="2026-02-16 10:05:28.921442354 +0000 UTC m=+1186.614598534" lastFinishedPulling="2026-02-16 10:05:32.598351673 +0000 UTC m=+1190.291507853" observedRunningTime="2026-02-16 10:05:33.440726993 +0000 UTC m=+1191.133883183" watchObservedRunningTime="2026-02-16 10:05:33.441841424 +0000 UTC m=+1191.134997604" Feb 16 10:05:34 crc kubenswrapper[4814]: I0216 10:05:34.395719 4814 generic.go:334] "Generic (PLEG): container finished" podID="54151705-0c05-4e03-99d4-9dc9d4a37de7" containerID="71be41e254df963886500f449476052102c05e7551bddd6e73231e4b3e990164" exitCode=0 Feb 16 10:05:34 crc kubenswrapper[4814]: I0216 10:05:34.395823 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"54151705-0c05-4e03-99d4-9dc9d4a37de7","Type":"ContainerDied","Data":"71be41e254df963886500f449476052102c05e7551bddd6e73231e4b3e990164"} Feb 16 10:05:34 crc kubenswrapper[4814]: I0216 10:05:34.403030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43c73c4c-5cdf-4b6d-93b0-afeb459b74c1","Type":"ContainerStarted","Data":"23248fb6ed97cab691364b6b3369cd957a951260ddde366a2ab7d63f23186412"} Feb 16 10:05:34 crc kubenswrapper[4814]: I0216 10:05:34.457889 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.09054779 podStartE2EDuration="1m4.457859643s" podCreationTimestamp="2026-02-16 10:04:30 +0000 UTC" firstStartedPulling="2026-02-16 10:04:32.243160741 
+0000 UTC m=+1129.936316921" lastFinishedPulling="2026-02-16 10:05:17.610472594 +0000 UTC m=+1175.303628774" observedRunningTime="2026-02-16 10:05:34.451704243 +0000 UTC m=+1192.144860443" watchObservedRunningTime="2026-02-16 10:05:34.457859643 +0000 UTC m=+1192.151015823" Feb 16 10:05:35 crc kubenswrapper[4814]: I0216 10:05:35.005655 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:35 crc kubenswrapper[4814]: I0216 10:05:35.244231 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:35 crc kubenswrapper[4814]: E0216 10:05:35.244545 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:35 crc kubenswrapper[4814]: E0216 10:05:35.244584 4814 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:35 crc kubenswrapper[4814]: E0216 10:05:35.244673 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:43.244640967 +0000 UTC m=+1200.937797147 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:36 crc kubenswrapper[4814]: I0216 10:05:36.574708 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:05:36 crc kubenswrapper[4814]: I0216 10:05:36.635718 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:36 crc kubenswrapper[4814]: I0216 10:05:36.636041 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" containerID="cri-o://25c1401ce4630f422939a508ec8558a998642e3f8ba6a341a66e79b041090ac0" gracePeriod=10 Feb 16 10:05:37 crc kubenswrapper[4814]: I0216 10:05:37.960254 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:05:37 crc kubenswrapper[4814]: I0216 10:05:37.960665 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:05:38 crc kubenswrapper[4814]: I0216 10:05:38.445897 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"54151705-0c05-4e03-99d4-9dc9d4a37de7","Type":"ContainerStarted","Data":"c5a371d370d7060057ced21f7bd674ef4042979102213c7da7b8fc12c6ea8926"} Feb 16 10:05:39 crc kubenswrapper[4814]: I0216 10:05:39.456898 4814 generic.go:334] "Generic (PLEG): container finished" podID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerID="25c1401ce4630f422939a508ec8558a998642e3f8ba6a341a66e79b041090ac0" exitCode=0 Feb 16 10:05:39 crc kubenswrapper[4814]: I0216 10:05:39.458670 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" event={"ID":"50747a7c-9e5f-4840-8878-62119b6ff4a6","Type":"ContainerDied","Data":"25c1401ce4630f422939a508ec8558a998642e3f8ba6a341a66e79b041090ac0"} Feb 16 10:05:39 crc kubenswrapper[4814]: I0216 10:05:39.489992 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371968.364809 podStartE2EDuration="1m8.489967036s" podCreationTimestamp="2026-02-16 10:04:31 +0000 UTC" firstStartedPulling="2026-02-16 10:04:34.618998145 +0000 UTC m=+1132.312154325" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:39.484065163 +0000 UTC m=+1197.177221353" watchObservedRunningTime="2026-02-16 10:05:39.489967036 +0000 UTC m=+1197.183123216" Feb 16 10:05:40 crc kubenswrapper[4814]: E0216 10:05:40.045393 4814 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.73:51642->38.102.83.73:32925: write tcp 38.102.83.73:51642->38.102.83.73:32925: write: broken pipe Feb 16 10:05:41 crc kubenswrapper[4814]: I0216 10:05:41.631389 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 16 10:05:41 crc kubenswrapper[4814]: I0216 10:05:41.631785 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 16 10:05:43 crc kubenswrapper[4814]: I0216 10:05:43.249193 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 10:05:43 crc kubenswrapper[4814]: I0216 10:05:43.249621 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 10:05:43 crc kubenswrapper[4814]: I0216 10:05:43.332952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:43 crc kubenswrapper[4814]: E0216 10:05:43.334185 4814 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 10:05:43 crc kubenswrapper[4814]: E0216 10:05:43.334207 4814 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 10:05:43 crc kubenswrapper[4814]: E0216 10:05:43.334245 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift podName:33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:59.334231289 +0000 UTC m=+1217.027387459 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift") pod "swift-storage-0" (UID: "33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36") : configmap "swift-ring-files" not found Feb 16 10:05:43 crc kubenswrapper[4814]: I0216 10:05:43.369959 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 10:05:43 crc kubenswrapper[4814]: I0216 10:05:43.668164 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.055304 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.216310 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.524701 4814 generic.go:334] "Generic (PLEG): container finished" podID="f89153bb-4a9e-419a-b142-b339a0797d78" containerID="d1618d236bbaed557c8c90c2802b65e22179b27c7b67f660c3b0ee2e3f1ce1a1" exitCode=0 Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.526064 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-68zpk" event={"ID":"f89153bb-4a9e-419a-b142-b339a0797d78","Type":"ContainerDied","Data":"d1618d236bbaed557c8c90c2802b65e22179b27c7b67f660c3b0ee2e3f1ce1a1"} Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.638086 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2vmt7"] Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.639711 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.647403 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vmt7"] Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.686299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hc8m\" (UniqueName: \"kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.686470 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.688350 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c82c-account-create-update-56mbg"] Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.689848 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.695026 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.705995 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c82c-account-create-update-56mbg"] Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.788962 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcdw\" (UniqueName: \"kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw\") pod \"placement-c82c-account-create-update-56mbg\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.789037 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hc8m\" (UniqueName: \"kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.789128 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.789229 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts\") pod \"placement-c82c-account-create-update-56mbg\" (UID: 
\"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.790585 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.814315 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hc8m\" (UniqueName: \"kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m\") pod \"placement-db-create-2vmt7\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.891583 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts\") pod \"placement-c82c-account-create-update-56mbg\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.891707 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcdw\" (UniqueName: \"kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw\") pod \"placement-c82c-account-create-update-56mbg\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.892808 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts\") pod 
\"placement-c82c-account-create-update-56mbg\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.913881 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcdw\" (UniqueName: \"kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw\") pod \"placement-c82c-account-create-update-56mbg\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:44 crc kubenswrapper[4814]: I0216 10:05:44.957591 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.002521 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.007430 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.191079 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.343591 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.402618 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.402794 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.402932 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.402977 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh9sp\" (UniqueName: \"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.403043 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.423150 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp" (OuterVolumeSpecName: "kube-api-access-nh9sp") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6"). InnerVolumeSpecName "kube-api-access-nh9sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.505782 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh9sp\" (UniqueName: \"kubernetes.io/projected/50747a7c-9e5f-4840-8878-62119b6ff4a6-kube-api-access-nh9sp\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.599389 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.600372 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" event={"ID":"50747a7c-9e5f-4840-8878-62119b6ff4a6","Type":"ContainerDied","Data":"6d196518ec23195c96a9a5afba3797bf3b2b9469eb07ef1f4ccba0f13411b366"} Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.600425 4814 scope.go:117] "RemoveContainer" containerID="25c1401ce4630f422939a508ec8558a998642e3f8ba6a341a66e79b041090ac0" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.633642 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.641469 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config" (OuterVolumeSpecName: "config") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:45 crc kubenswrapper[4814]: E0216 10:05:45.668025 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb podName:50747a7c-9e5f-4840-8878-62119b6ff4a6 nodeName:}" failed. No retries permitted until 2026-02-16 10:05:46.167991877 +0000 UTC m=+1203.861148057 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6") : error deleting /var/lib/kubelet/pods/50747a7c-9e5f-4840-8878-62119b6ff4a6/volume-subpaths: remove /var/lib/kubelet/pods/50747a7c-9e5f-4840-8878-62119b6ff4a6/volume-subpaths: no such file or directory Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.668434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.713809 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.713856 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.713865 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:45 crc kubenswrapper[4814]: I0216 10:05:45.722770 4814 scope.go:117] "RemoveContainer" containerID="e2d49015fd2798910ed78eb3721eb7ed25e456f23f62adbb4b186bdee092c14e" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.127388 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-l5r5d"] Feb 16 10:05:46 crc kubenswrapper[4814]: E0216 10:05:46.128318 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.128332 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" Feb 16 10:05:46 crc kubenswrapper[4814]: E0216 10:05:46.128366 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="init" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.128373 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="init" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.128595 4814 
memory_manager.go:354] "RemoveStaleState removing state" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.129310 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.138412 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-l5r5d"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.232813 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.236069 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") pod \"50747a7c-9e5f-4840-8878-62119b6ff4a6\" (UID: \"50747a7c-9e5f-4840-8878-62119b6ff4a6\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.237636 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50747a7c-9e5f-4840-8878-62119b6ff4a6" (UID: "50747a7c-9e5f-4840-8878-62119b6ff4a6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.237879 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vf9b\" (UniqueName: \"kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.238198 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.238451 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50747a7c-9e5f-4840-8878-62119b6ff4a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.336433 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-4d74-account-create-update-mq22d"] Feb 16 10:05:46 crc kubenswrapper[4814]: E0216 10:05:46.336984 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89153bb-4a9e-419a-b142-b339a0797d78" containerName="swift-ring-rebalance" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.337002 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89153bb-4a9e-419a-b142-b339a0797d78" containerName="swift-ring-rebalance" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.337214 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89153bb-4a9e-419a-b142-b339a0797d78" containerName="swift-ring-rebalance" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.337976 4814 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339289 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339333 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339372 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339482 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdl56\" (UniqueName: \"kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339566 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339601 4814 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339672 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts\") pod \"f89153bb-4a9e-419a-b142-b339a0797d78\" (UID: \"f89153bb-4a9e-419a-b142-b339a0797d78\") " Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.339979 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vf9b\" (UniqueName: \"kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.340017 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.340226 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.341178 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.341543 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.346468 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.353211 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-4d74-account-create-update-mq22d"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.358102 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56" (OuterVolumeSpecName: "kube-api-access-cdl56") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "kube-api-access-cdl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.370861 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: W0216 10:05:46.374662 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd69bc477_7bb4_4eb7_9598_119036f38586.slice/crio-1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b WatchSource:0}: Error finding container 1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b: Status 404 returned error can't find the container with id 1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.375838 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vf9b\" (UniqueName: \"kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b\") pod \"watcher-db-create-l5r5d\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.386509 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.390793 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c82c-account-create-update-56mbg"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.391053 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts" (OuterVolumeSpecName: "scripts") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.413914 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f89153bb-4a9e-419a-b142-b339a0797d78" (UID: "f89153bb-4a9e-419a-b142-b339a0797d78"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.427484 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vmt7"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443003 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443219 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8n9g\" (UniqueName: \"kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443280 4814 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443292 4814 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443304 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443316 4814 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-cdl56\" (UniqueName: \"kubernetes.io/projected/f89153bb-4a9e-419a-b142-b339a0797d78-kube-api-access-cdl56\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443326 4814 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f89153bb-4a9e-419a-b142-b339a0797d78-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443335 4814 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f89153bb-4a9e-419a-b142-b339a0797d78-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.443345 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f89153bb-4a9e-419a-b142-b339a0797d78-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.445436 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.545048 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8n9g\" (UniqueName: \"kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.545161 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.546133 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.554447 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.562703 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554cc8c86f-t5x22"] Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.575496 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8n9g\" (UniqueName: \"kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g\") pod \"watcher-4d74-account-create-update-mq22d\" (UID: 
\"066d5f2f-7797-41bc-850f-c4639db01b54\") " pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.611412 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vmt7" event={"ID":"57490ed3-3fac-4ecb-84b5-1017a06e0ca9","Type":"ContainerStarted","Data":"d6abe8e6b657abf15a02801d733f05cc5761ee1aa2db068cc7c4aec95d2595a5"} Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.615455 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerStarted","Data":"c1efa8e6033f67f0eccc7a1db7c17256aac48945f0774924100251653d0e2d30"} Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.618445 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-68zpk" event={"ID":"f89153bb-4a9e-419a-b142-b339a0797d78","Type":"ContainerDied","Data":"f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2"} Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.618612 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d37ad1af9fd5898b94efacee7376f824ec42d56c5f292d57e792db582620d2" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.618521 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-68zpk" Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.621846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c82c-account-create-update-56mbg" event={"ID":"d69bc477-7bb4-4eb7-9598-119036f38586","Type":"ContainerStarted","Data":"1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b"} Feb 16 10:05:46 crc kubenswrapper[4814]: I0216 10:05:46.677385 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.010370 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" path="/var/lib/kubelet/pods/50747a7c-9e5f-4840-8878-62119b6ff4a6/volumes" Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.224932 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-4d74-account-create-update-mq22d"] Feb 16 10:05:47 crc kubenswrapper[4814]: W0216 10:05:47.271142 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod066d5f2f_7797_41bc_850f_c4639db01b54.slice/crio-c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7 WatchSource:0}: Error finding container c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7: Status 404 returned error can't find the container with id c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7 Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.324576 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-l5r5d"] Feb 16 10:05:47 crc kubenswrapper[4814]: W0216 10:05:47.341894 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2c4883_477f_42e0_923c_48053735598f.slice/crio-bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33 WatchSource:0}: Error finding container bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33: Status 404 returned error can't find the container with id bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33 Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.635498 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-l5r5d" 
event={"ID":"4d2c4883-477f-42e0-923c-48053735598f","Type":"ContainerStarted","Data":"bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33"} Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.638087 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-4d74-account-create-update-mq22d" event={"ID":"066d5f2f-7797-41bc-850f-c4639db01b54","Type":"ContainerStarted","Data":"c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7"} Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.640305 4814 generic.go:334] "Generic (PLEG): container finished" podID="d69bc477-7bb4-4eb7-9598-119036f38586" containerID="09fbf61ed9652c4a68026d9446999e4cc6ccd2c939a823c03b735fd6e8111c5c" exitCode=0 Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.640378 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c82c-account-create-update-56mbg" event={"ID":"d69bc477-7bb4-4eb7-9598-119036f38586","Type":"ContainerDied","Data":"09fbf61ed9652c4a68026d9446999e4cc6ccd2c939a823c03b735fd6e8111c5c"} Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.644367 4814 generic.go:334] "Generic (PLEG): container finished" podID="57490ed3-3fac-4ecb-84b5-1017a06e0ca9" containerID="11f71749571f2988441878d39fe87babb7981192213483e497c2e55d796959e8" exitCode=0 Feb 16 10:05:47 crc kubenswrapper[4814]: I0216 10:05:47.644429 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vmt7" event={"ID":"57490ed3-3fac-4ecb-84b5-1017a06e0ca9","Type":"ContainerDied","Data":"11f71749571f2988441878d39fe87babb7981192213483e497c2e55d796959e8"} Feb 16 10:05:48 crc kubenswrapper[4814]: I0216 10:05:48.687678 4814 generic.go:334] "Generic (PLEG): container finished" podID="4d2c4883-477f-42e0-923c-48053735598f" containerID="00a3ca713b49e466cf756734fabf15ceed721f487084f4c90ff73e6f09882873" exitCode=0 Feb 16 10:05:48 crc kubenswrapper[4814]: I0216 10:05:48.688388 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-db-create-l5r5d" event={"ID":"4d2c4883-477f-42e0-923c-48053735598f","Type":"ContainerDied","Data":"00a3ca713b49e466cf756734fabf15ceed721f487084f4c90ff73e6f09882873"} Feb 16 10:05:48 crc kubenswrapper[4814]: I0216 10:05:48.707179 4814 generic.go:334] "Generic (PLEG): container finished" podID="066d5f2f-7797-41bc-850f-c4639db01b54" containerID="0e536721c4beed0a71f0aeac25cdd776d60da37507472120bc862bae521e5507" exitCode=0 Feb 16 10:05:48 crc kubenswrapper[4814]: I0216 10:05:48.707482 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-4d74-account-create-update-mq22d" event={"ID":"066d5f2f-7797-41bc-850f-c4639db01b54","Type":"ContainerDied","Data":"0e536721c4beed0a71f0aeac25cdd776d60da37507472120bc862bae521e5507"} Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.341751 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.352364 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.407828 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hc8m\" (UniqueName: \"kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m\") pod \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.408004 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcdw\" (UniqueName: \"kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw\") pod \"d69bc477-7bb4-4eb7-9598-119036f38586\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.408130 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts\") pod \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\" (UID: \"57490ed3-3fac-4ecb-84b5-1017a06e0ca9\") " Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.408186 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts\") pod \"d69bc477-7bb4-4eb7-9598-119036f38586\" (UID: \"d69bc477-7bb4-4eb7-9598-119036f38586\") " Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.408951 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57490ed3-3fac-4ecb-84b5-1017a06e0ca9" (UID: "57490ed3-3fac-4ecb-84b5-1017a06e0ca9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.409076 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d69bc477-7bb4-4eb7-9598-119036f38586" (UID: "d69bc477-7bb4-4eb7-9598-119036f38586"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.415933 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m" (OuterVolumeSpecName: "kube-api-access-7hc8m") pod "57490ed3-3fac-4ecb-84b5-1017a06e0ca9" (UID: "57490ed3-3fac-4ecb-84b5-1017a06e0ca9"). InnerVolumeSpecName "kube-api-access-7hc8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.431515 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw" (OuterVolumeSpecName: "kube-api-access-vzcdw") pod "d69bc477-7bb4-4eb7-9598-119036f38586" (UID: "d69bc477-7bb4-4eb7-9598-119036f38586"). InnerVolumeSpecName "kube-api-access-vzcdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.510409 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hc8m\" (UniqueName: \"kubernetes.io/projected/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-kube-api-access-7hc8m\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.510473 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcdw\" (UniqueName: \"kubernetes.io/projected/d69bc477-7bb4-4eb7-9598-119036f38586-kube-api-access-vzcdw\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.510484 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57490ed3-3fac-4ecb-84b5-1017a06e0ca9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.510497 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d69bc477-7bb4-4eb7-9598-119036f38586-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.720204 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerStarted","Data":"8e1709bbe8837ab504fa2a3897057bdf723da315534460f4205aeeddfe80de75"} Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.723990 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c82c-account-create-update-56mbg" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.724013 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c82c-account-create-update-56mbg" event={"ID":"d69bc477-7bb4-4eb7-9598-119036f38586","Type":"ContainerDied","Data":"1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b"} Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.724865 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2fe292615f5cb251a3f372a91155556009f899efcf505ba57c83b02399f01b" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.728141 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vmt7" Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.728153 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vmt7" event={"ID":"57490ed3-3fac-4ecb-84b5-1017a06e0ca9","Type":"ContainerDied","Data":"d6abe8e6b657abf15a02801d733f05cc5761ee1aa2db068cc7c4aec95d2595a5"} Feb 16 10:05:49 crc kubenswrapper[4814]: I0216 10:05:49.728254 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6abe8e6b657abf15a02801d733f05cc5761ee1aa2db068cc7c4aec95d2595a5" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.003223 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-554cc8c86f-t5x22" podUID="50747a7c-9e5f-4840-8878-62119b6ff4a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.017748 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dc2nv" podUID="7de6150f-ee9f-437c-8813-4255d2533e45" containerName="ovn-controller" probeResult="failure" output=< Feb 16 10:05:50 crc kubenswrapper[4814]: ERROR - ovn-controller connection status 
is 'not connected', expecting 'connected' status Feb 16 10:05:50 crc kubenswrapper[4814]: > Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.060154 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.124498 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts\") pod \"066d5f2f-7797-41bc-850f-c4639db01b54\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.125064 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8n9g\" (UniqueName: \"kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g\") pod \"066d5f2f-7797-41bc-850f-c4639db01b54\" (UID: \"066d5f2f-7797-41bc-850f-c4639db01b54\") " Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.126135 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "066d5f2f-7797-41bc-850f-c4639db01b54" (UID: "066d5f2f-7797-41bc-850f-c4639db01b54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.132782 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g" (OuterVolumeSpecName: "kube-api-access-s8n9g") pod "066d5f2f-7797-41bc-850f-c4639db01b54" (UID: "066d5f2f-7797-41bc-850f-c4639db01b54"). InnerVolumeSpecName "kube-api-access-s8n9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.134080 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.185556 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bw2zz"] Feb 16 10:05:50 crc kubenswrapper[4814]: E0216 10:05:50.186170 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066d5f2f-7797-41bc-850f-c4639db01b54" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186191 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="066d5f2f-7797-41bc-850f-c4639db01b54" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: E0216 10:05:50.186207 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2c4883-477f-42e0-923c-48053735598f" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186214 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2c4883-477f-42e0-923c-48053735598f" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: E0216 10:05:50.186223 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57490ed3-3fac-4ecb-84b5-1017a06e0ca9" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186231 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="57490ed3-3fac-4ecb-84b5-1017a06e0ca9" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: E0216 10:05:50.186254 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d69bc477-7bb4-4eb7-9598-119036f38586" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186263 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d69bc477-7bb4-4eb7-9598-119036f38586" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186423 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="57490ed3-3fac-4ecb-84b5-1017a06e0ca9" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186436 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="066d5f2f-7797-41bc-850f-c4639db01b54" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186445 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d69bc477-7bb4-4eb7-9598-119036f38586" containerName="mariadb-account-create-update" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.186455 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2c4883-477f-42e0-923c-48053735598f" containerName="mariadb-database-create" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.188436 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.194655 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.211156 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bw2zz"] Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.227687 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vf9b\" (UniqueName: \"kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b\") pod \"4d2c4883-477f-42e0-923c-48053735598f\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.228025 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts\") pod \"4d2c4883-477f-42e0-923c-48053735598f\" (UID: \"4d2c4883-477f-42e0-923c-48053735598f\") " Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.228512 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.228639 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dcth\" (UniqueName: \"kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 
16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.228967 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8n9g\" (UniqueName: \"kubernetes.io/projected/066d5f2f-7797-41bc-850f-c4639db01b54-kube-api-access-s8n9g\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.229365 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/066d5f2f-7797-41bc-850f-c4639db01b54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.230973 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d2c4883-477f-42e0-923c-48053735598f" (UID: "4d2c4883-477f-42e0-923c-48053735598f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.234154 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b" (OuterVolumeSpecName: "kube-api-access-9vf9b") pod "4d2c4883-477f-42e0-923c-48053735598f" (UID: "4d2c4883-477f-42e0-923c-48053735598f"). InnerVolumeSpecName "kube-api-access-9vf9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.331126 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dcth\" (UniqueName: \"kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.332013 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.332818 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vf9b\" (UniqueName: \"kubernetes.io/projected/4d2c4883-477f-42e0-923c-48053735598f-kube-api-access-9vf9b\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.333109 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d2c4883-477f-42e0-923c-48053735598f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.332979 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.349364 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dcth\" (UniqueName: 
\"kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth\") pod \"root-account-create-update-bw2zz\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.589342 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.751251 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-l5r5d" event={"ID":"4d2c4883-477f-42e0-923c-48053735598f","Type":"ContainerDied","Data":"bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33"} Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.751301 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd02cb2f0396cdebe236870d6a50cc058dfdde9391148d2184052a282ebc1d33" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.751378 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-l5r5d" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.754187 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-4d74-account-create-update-mq22d" event={"ID":"066d5f2f-7797-41bc-850f-c4639db01b54","Type":"ContainerDied","Data":"c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7"} Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.754241 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1eeb56cf39d76f2d58b2d4e8d44d76862d27511c9cbae8f7dd230a3a6b36dc7" Feb 16 10:05:50 crc kubenswrapper[4814]: I0216 10:05:50.754252 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-4d74-account-create-update-mq22d" Feb 16 10:05:51 crc kubenswrapper[4814]: I0216 10:05:51.080877 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bw2zz"] Feb 16 10:05:52 crc kubenswrapper[4814]: W0216 10:05:52.057098 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ef03ddc_442a_470c_b36b_d75b95e050d4.slice/crio-153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf WatchSource:0}: Error finding container 153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf: Status 404 returned error can't find the container with id 153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.794210 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerStarted","Data":"2c46ab5dcfa13b2c38db786abffab6e62cd2af9558795c0ee42ae18e4fb8056f"} Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.842054 4814 generic.go:334] "Generic (PLEG): container finished" podID="b4e759af-f091-47c0-accc-c68b45b277fa" containerID="1846349bf7b7f4f56f152afa89867acf1d900891cbd2de6857821f736a54caf8" exitCode=0 Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.842796 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4e759af-f091-47c0-accc-c68b45b277fa","Type":"ContainerDied","Data":"1846349bf7b7f4f56f152afa89867acf1d900891cbd2de6857821f736a54caf8"} Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.871281 4814 generic.go:334] "Generic (PLEG): container finished" podID="9ef03ddc-442a-470c-b36b-d75b95e050d4" containerID="03dfa1ca386eaa421b810cf699533e26e54ab704dee95d4ca344aa6802a560fe" exitCode=0 Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.871409 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bw2zz" event={"ID":"9ef03ddc-442a-470c-b36b-d75b95e050d4","Type":"ContainerDied","Data":"03dfa1ca386eaa421b810cf699533e26e54ab704dee95d4ca344aa6802a560fe"} Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.871451 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bw2zz" event={"ID":"9ef03ddc-442a-470c-b36b-d75b95e050d4","Type":"ContainerStarted","Data":"153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf"} Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.876406 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=36.044374853 podStartE2EDuration="1m16.876381785s" podCreationTimestamp="2026-02-16 10:04:36 +0000 UTC" firstStartedPulling="2026-02-16 10:05:11.325248489 +0000 UTC m=+1169.018404669" lastFinishedPulling="2026-02-16 10:05:52.157255421 +0000 UTC m=+1209.850411601" observedRunningTime="2026-02-16 10:05:52.872123898 +0000 UTC m=+1210.565280098" watchObservedRunningTime="2026-02-16 10:05:52.876381785 +0000 UTC m=+1210.569537965" Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.887833 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a0b4bfb-2144-4fd9-be15-07396c44a11c" containerID="081312a7b74d0b18ddb6ccf7b84fb9a8efe3dbf13e3158633b3976c9be4a3e20" exitCode=0 Feb 16 10:05:52 crc kubenswrapper[4814]: I0216 10:05:52.888246 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6a0b4bfb-2144-4fd9-be15-07396c44a11c","Type":"ContainerDied","Data":"081312a7b74d0b18ddb6ccf7b84fb9a8efe3dbf13e3158633b3976c9be4a3e20"} Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.378773 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rqskr"] Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.379926 4814 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.393293 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rqskr"] Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.490587 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e11f-account-create-update-pl8x6"] Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.493240 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.496198 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.505310 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e11f-account-create-update-pl8x6"] Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.506338 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.506386 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7qm\" (UniqueName: \"kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.609205 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2q9\" (UniqueName: 
\"kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.609316 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.609356 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p7qm\" (UniqueName: \"kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.609446 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.610521 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.633863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p7qm\" (UniqueName: 
\"kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm\") pod \"glance-db-create-rqskr\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.701715 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rqskr" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.711348 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.711478 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2q9\" (UniqueName: \"kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.712239 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.741157 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2q9\" (UniqueName: \"kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9\") pod \"glance-e11f-account-create-update-pl8x6\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " 
pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.810912 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.916932 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b4e759af-f091-47c0-accc-c68b45b277fa","Type":"ContainerStarted","Data":"f02ab00466dd3941421627eea21c0c682ce053eaa587dbca6c1e15831167cf4d"} Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.917848 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.931485 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"6a0b4bfb-2144-4fd9-be15-07396c44a11c","Type":"ContainerStarted","Data":"50a2d3fc65414f2c592547d7ed50b5def9a675071ea31807dec1cc6600e3f232"} Feb 16 10:05:53 crc kubenswrapper[4814]: I0216 10:05:53.932207 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.021913 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371950.832891 podStartE2EDuration="1m26.021884213s" podCreationTimestamp="2026-02-16 10:04:28 +0000 UTC" firstStartedPulling="2026-02-16 10:04:31.337341648 +0000 UTC m=+1129.030497828" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:53.964974011 +0000 UTC m=+1211.658130191" watchObservedRunningTime="2026-02-16 10:05:54.021884213 +0000 UTC m=+1211.715040393" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.025226 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" 
podStartSLOduration=39.680880494 podStartE2EDuration="1m26.025211905s" podCreationTimestamp="2026-02-16 10:04:28 +0000 UTC" firstStartedPulling="2026-02-16 10:04:31.267563652 +0000 UTC m=+1128.960719832" lastFinishedPulling="2026-02-16 10:05:17.611895073 +0000 UTC m=+1175.305051243" observedRunningTime="2026-02-16 10:05:54.016401192 +0000 UTC m=+1211.709557402" watchObservedRunningTime="2026-02-16 10:05:54.025211905 +0000 UTC m=+1211.718368085" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.196211 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rqskr"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.247884 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6dxk8"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.249450 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.290732 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6dxk8"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.328445 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.328589 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plpv\" (UniqueName: \"kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.385696 4814 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d31f-account-create-update-jgnvs"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.387315 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.393649 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.401605 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d31f-account-create-update-jgnvs"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.431414 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8plpv\" (UniqueName: \"kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.431517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7pc\" (UniqueName: \"kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc\") pod \"keystone-d31f-account-create-update-jgnvs\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.431635 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts\") pod \"keystone-d31f-account-create-update-jgnvs\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.431736 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.432857 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.495468 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e11f-account-create-update-pl8x6"] Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.512229 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8plpv\" (UniqueName: \"kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv\") pod \"keystone-db-create-6dxk8\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.544846 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7pc\" (UniqueName: \"kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc\") pod \"keystone-d31f-account-create-update-jgnvs\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.545010 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts\") pod \"keystone-d31f-account-create-update-jgnvs\" 
(UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.557340 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts\") pod \"keystone-d31f-account-create-update-jgnvs\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.580252 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7pc\" (UniqueName: \"kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc\") pod \"keystone-d31f-account-create-update-jgnvs\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.583074 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.588635 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.620175 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.647167 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dcth\" (UniqueName: \"kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth\") pod \"9ef03ddc-442a-470c-b36b-d75b95e050d4\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.648983 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts\") pod \"9ef03ddc-442a-470c-b36b-d75b95e050d4\" (UID: \"9ef03ddc-442a-470c-b36b-d75b95e050d4\") " Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.652580 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ef03ddc-442a-470c-b36b-d75b95e050d4" (UID: "9ef03ddc-442a-470c-b36b-d75b95e050d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.658352 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth" (OuterVolumeSpecName: "kube-api-access-4dcth") pod "9ef03ddc-442a-470c-b36b-d75b95e050d4" (UID: "9ef03ddc-442a-470c-b36b-d75b95e050d4"). InnerVolumeSpecName "kube-api-access-4dcth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.753698 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dcth\" (UniqueName: \"kubernetes.io/projected/9ef03ddc-442a-470c-b36b-d75b95e050d4-kube-api-access-4dcth\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.753744 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ef03ddc-442a-470c-b36b-d75b95e050d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.940989 4814 generic.go:334] "Generic (PLEG): container finished" podID="19661670-37f9-4577-93d4-cd87303f3008" containerID="366bd9fdc38de7d4c51396d0d63626f375267d4b7c2ed227d24d4be7f09654d0" exitCode=0 Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.941218 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19661670-37f9-4577-93d4-cd87303f3008","Type":"ContainerDied","Data":"366bd9fdc38de7d4c51396d0d63626f375267d4b7c2ed227d24d4be7f09654d0"} Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.944167 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rqskr" event={"ID":"d36422e8-334d-414d-8d3f-b5a66ce72da2","Type":"ContainerStarted","Data":"a5805c30106ebe0ea388418561dea2fd3732034017adbc838825c1a0c52863e1"} Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.944231 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rqskr" event={"ID":"d36422e8-334d-414d-8d3f-b5a66ce72da2","Type":"ContainerStarted","Data":"6c747d1225033627db23ed7456dd1c662342c03ed86bc2fe63e889c5efb472ce"} Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.945776 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bw2zz" 
event={"ID":"9ef03ddc-442a-470c-b36b-d75b95e050d4","Type":"ContainerDied","Data":"153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf"} Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.945811 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153915ef688122097771c55b1e4cf1ea21c305385142e7d148742b33935addaf" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.945892 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bw2zz" Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.958727 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e11f-account-create-update-pl8x6" event={"ID":"f4cf8b58-cd5c-46a9-9513-89178d899f14","Type":"ContainerStarted","Data":"9f00a193ae84bd53dd92b270216bb64d06b8ed3d41272c938a8315b63ae9273e"} Feb 16 10:05:54 crc kubenswrapper[4814]: I0216 10:05:54.958781 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e11f-account-create-update-pl8x6" event={"ID":"f4cf8b58-cd5c-46a9-9513-89178d899f14","Type":"ContainerStarted","Data":"03ba48254e45f158e1e342486fa17c0f6ede99f498fd9e2d0f87c98b6a952fd1"} Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.009296 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-e11f-account-create-update-pl8x6" podStartSLOduration=2.009268142 podStartE2EDuration="2.009268142s" podCreationTimestamp="2026-02-16 10:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:55.005072376 +0000 UTC m=+1212.698228566" watchObservedRunningTime="2026-02-16 10:05:55.009268142 +0000 UTC m=+1212.702424322" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.035946 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-dc2nv" podUID="7de6150f-ee9f-437c-8813-4255d2533e45" 
containerName="ovn-controller" probeResult="failure" output=< Feb 16 10:05:55 crc kubenswrapper[4814]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 10:05:55 crc kubenswrapper[4814]: > Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.049714 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-rqskr" podStartSLOduration=2.049677418 podStartE2EDuration="2.049677418s" podCreationTimestamp="2026-02-16 10:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:55.047229831 +0000 UTC m=+1212.740386011" watchObservedRunningTime="2026-02-16 10:05:55.049677418 +0000 UTC m=+1212.742833608" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.085381 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.103412 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v6xwq" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.206213 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d31f-account-create-update-jgnvs"] Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.275879 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6dxk8"] Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.418296 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dc2nv-config-cvjvb"] Feb 16 10:05:55 crc kubenswrapper[4814]: E0216 10:05:55.418796 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef03ddc-442a-470c-b36b-d75b95e050d4" containerName="mariadb-account-create-update" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.418813 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ef03ddc-442a-470c-b36b-d75b95e050d4" containerName="mariadb-account-create-update" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.419003 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef03ddc-442a-470c-b36b-d75b95e050d4" containerName="mariadb-account-create-update" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.419685 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.423674 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.476637 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv-config-cvjvb"] Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.572851 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.572948 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmst7\" (UniqueName: \"kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.572990 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts\") pod 
\"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.573037 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.573067 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.573128 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675323 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmst7\" (UniqueName: \"kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675398 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675453 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675480 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675696 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.675858 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.676272 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.676818 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.677150 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.677978 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.678953 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.707244 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmst7\" (UniqueName: 
\"kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7\") pod \"ovn-controller-dc2nv-config-cvjvb\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.805815 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:55 crc kubenswrapper[4814]: I0216 10:05:55.997769 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19661670-37f9-4577-93d4-cd87303f3008","Type":"ContainerStarted","Data":"989119bf4cdc5383927490737694d386979a77300d294b197ba0d8dc93a64f34"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:55.998719 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.031858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dxk8" event={"ID":"08262c5b-0d62-4a80-9b03-76fc4d2297f3","Type":"ContainerStarted","Data":"09e7bff9ed19c6120a19fe7f800e884cb583cded12428427bef73bfe718eea04"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.031925 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dxk8" event={"ID":"08262c5b-0d62-4a80-9b03-76fc4d2297f3","Type":"ContainerStarted","Data":"7a5312503ade5f57f9e01b9eab7899a0a03723e3039276fab4e79d77ec2714d7"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.040747 4814 generic.go:334] "Generic (PLEG): container finished" podID="d36422e8-334d-414d-8d3f-b5a66ce72da2" containerID="a5805c30106ebe0ea388418561dea2fd3732034017adbc838825c1a0c52863e1" exitCode=0 Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.040973 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rqskr" 
event={"ID":"d36422e8-334d-414d-8d3f-b5a66ce72da2","Type":"ContainerDied","Data":"a5805c30106ebe0ea388418561dea2fd3732034017adbc838825c1a0c52863e1"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.044000 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4cf8b58-cd5c-46a9-9513-89178d899f14" containerID="9f00a193ae84bd53dd92b270216bb64d06b8ed3d41272c938a8315b63ae9273e" exitCode=0 Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.044092 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e11f-account-create-update-pl8x6" event={"ID":"f4cf8b58-cd5c-46a9-9513-89178d899f14","Type":"ContainerDied","Data":"9f00a193ae84bd53dd92b270216bb64d06b8ed3d41272c938a8315b63ae9273e"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.054763 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371949.800041 podStartE2EDuration="1m27.054734975s" podCreationTimestamp="2026-02-16 10:04:29 +0000 UTC" firstStartedPulling="2026-02-16 10:04:31.917628119 +0000 UTC m=+1129.610784299" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:56.051199728 +0000 UTC m=+1213.744355908" watchObservedRunningTime="2026-02-16 10:05:56.054734975 +0000 UTC m=+1213.747891165" Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.055554 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d31f-account-create-update-jgnvs" event={"ID":"15e41c90-a220-49bf-ac62-9653ee282da0","Type":"ContainerStarted","Data":"706d2cdbdf57a6736c3a7e8da5f686610a1b33c4478b6341533b0fc98c5d1184"} Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.055601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d31f-account-create-update-jgnvs" event={"ID":"15e41c90-a220-49bf-ac62-9653ee282da0","Type":"ContainerStarted","Data":"8b51eee156095caa516544179d00460b86e7c9324e2ee8fca9cd8386381c53e5"} Feb 16 
10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.158939 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6dxk8" podStartSLOduration=2.158911844 podStartE2EDuration="2.158911844s" podCreationTimestamp="2026-02-16 10:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:56.131650291 +0000 UTC m=+1213.824806471" watchObservedRunningTime="2026-02-16 10:05:56.158911844 +0000 UTC m=+1213.852068024" Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.217053 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d31f-account-create-update-jgnvs" podStartSLOduration=2.217024421 podStartE2EDuration="2.217024421s" podCreationTimestamp="2026-02-16 10:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:56.215916249 +0000 UTC m=+1213.909072429" watchObservedRunningTime="2026-02-16 10:05:56.217024421 +0000 UTC m=+1213.910180601" Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.392603 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv-config-cvjvb"] Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.596733 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bw2zz"] Feb 16 10:05:56 crc kubenswrapper[4814]: I0216 10:05:56.609794 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bw2zz"] Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.006169 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef03ddc-442a-470c-b36b-d75b95e050d4" path="/var/lib/kubelet/pods/9ef03ddc-442a-470c-b36b-d75b95e050d4/volumes" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.064451 4814 generic.go:334] "Generic (PLEG): 
container finished" podID="08262c5b-0d62-4a80-9b03-76fc4d2297f3" containerID="09e7bff9ed19c6120a19fe7f800e884cb583cded12428427bef73bfe718eea04" exitCode=0 Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.064550 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dxk8" event={"ID":"08262c5b-0d62-4a80-9b03-76fc4d2297f3","Type":"ContainerDied","Data":"09e7bff9ed19c6120a19fe7f800e884cb583cded12428427bef73bfe718eea04"} Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.067928 4814 generic.go:334] "Generic (PLEG): container finished" podID="15e41c90-a220-49bf-ac62-9653ee282da0" containerID="706d2cdbdf57a6736c3a7e8da5f686610a1b33c4478b6341533b0fc98c5d1184" exitCode=0 Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.067986 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d31f-account-create-update-jgnvs" event={"ID":"15e41c90-a220-49bf-ac62-9653ee282da0","Type":"ContainerDied","Data":"706d2cdbdf57a6736c3a7e8da5f686610a1b33c4478b6341533b0fc98c5d1184"} Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.071085 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-cvjvb" event={"ID":"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a","Type":"ContainerStarted","Data":"fd838acb92ff96fcd44703bd387aa4dd5bf24f118cc44149dad8da43497034a3"} Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.071115 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-cvjvb" event={"ID":"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a","Type":"ContainerStarted","Data":"902816deb521a17526989b0d1915533a24a5a7f4e9d649b33308d195c581467a"} Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.122357 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dc2nv-config-cvjvb" podStartSLOduration=2.12232103 podStartE2EDuration="2.12232103s" podCreationTimestamp="2026-02-16 10:05:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:05:57.115688617 +0000 UTC m=+1214.808844807" watchObservedRunningTime="2026-02-16 10:05:57.12232103 +0000 UTC m=+1214.815477210" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.534224 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.584281 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.592647 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rqskr" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.745553 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p7qm\" (UniqueName: \"kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm\") pod \"d36422e8-334d-414d-8d3f-b5a66ce72da2\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.745811 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts\") pod \"f4cf8b58-cd5c-46a9-9513-89178d899f14\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.745895 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n2q9\" (UniqueName: \"kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9\") pod \"f4cf8b58-cd5c-46a9-9513-89178d899f14\" (UID: \"f4cf8b58-cd5c-46a9-9513-89178d899f14\") " Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.745984 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts\") pod \"d36422e8-334d-414d-8d3f-b5a66ce72da2\" (UID: \"d36422e8-334d-414d-8d3f-b5a66ce72da2\") " Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.746778 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d36422e8-334d-414d-8d3f-b5a66ce72da2" (UID: "d36422e8-334d-414d-8d3f-b5a66ce72da2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.747217 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4cf8b58-cd5c-46a9-9513-89178d899f14" (UID: "f4cf8b58-cd5c-46a9-9513-89178d899f14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.756018 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm" (OuterVolumeSpecName: "kube-api-access-2p7qm") pod "d36422e8-334d-414d-8d3f-b5a66ce72da2" (UID: "d36422e8-334d-414d-8d3f-b5a66ce72da2"). InnerVolumeSpecName "kube-api-access-2p7qm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.771885 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9" (OuterVolumeSpecName: "kube-api-access-7n2q9") pod "f4cf8b58-cd5c-46a9-9513-89178d899f14" (UID: "f4cf8b58-cd5c-46a9-9513-89178d899f14"). InnerVolumeSpecName "kube-api-access-7n2q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.849570 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d36422e8-334d-414d-8d3f-b5a66ce72da2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.849617 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p7qm\" (UniqueName: \"kubernetes.io/projected/d36422e8-334d-414d-8d3f-b5a66ce72da2-kube-api-access-2p7qm\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.849638 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4cf8b58-cd5c-46a9-9513-89178d899f14-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:57 crc kubenswrapper[4814]: I0216 10:05:57.849648 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n2q9\" (UniqueName: \"kubernetes.io/projected/f4cf8b58-cd5c-46a9-9513-89178d899f14-kube-api-access-7n2q9\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.084252 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rqskr" event={"ID":"d36422e8-334d-414d-8d3f-b5a66ce72da2","Type":"ContainerDied","Data":"6c747d1225033627db23ed7456dd1c662342c03ed86bc2fe63e889c5efb472ce"} Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.084310 
4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c747d1225033627db23ed7456dd1c662342c03ed86bc2fe63e889c5efb472ce" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.084400 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rqskr" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.095352 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e11f-account-create-update-pl8x6" event={"ID":"f4cf8b58-cd5c-46a9-9513-89178d899f14","Type":"ContainerDied","Data":"03ba48254e45f158e1e342486fa17c0f6ede99f498fd9e2d0f87c98b6a952fd1"} Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.095410 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ba48254e45f158e1e342486fa17c0f6ede99f498fd9e2d0f87c98b6a952fd1" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.095488 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e11f-account-create-update-pl8x6" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.101092 4814 generic.go:334] "Generic (PLEG): container finished" podID="66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" containerID="fd838acb92ff96fcd44703bd387aa4dd5bf24f118cc44149dad8da43497034a3" exitCode=0 Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.101730 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-cvjvb" event={"ID":"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a","Type":"ContainerDied","Data":"fd838acb92ff96fcd44703bd387aa4dd5bf24f118cc44149dad8da43497034a3"} Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.511067 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.658809 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.671721 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8plpv\" (UniqueName: \"kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv\") pod \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.671791 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts\") pod \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\" (UID: \"08262c5b-0d62-4a80-9b03-76fc4d2297f3\") " Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.673340 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08262c5b-0d62-4a80-9b03-76fc4d2297f3" (UID: "08262c5b-0d62-4a80-9b03-76fc4d2297f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.680822 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv" (OuterVolumeSpecName: "kube-api-access-8plpv") pod "08262c5b-0d62-4a80-9b03-76fc4d2297f3" (UID: "08262c5b-0d62-4a80-9b03-76fc4d2297f3"). InnerVolumeSpecName "kube-api-access-8plpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.773659 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts\") pod \"15e41c90-a220-49bf-ac62-9653ee282da0\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.773835 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp7pc\" (UniqueName: \"kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc\") pod \"15e41c90-a220-49bf-ac62-9653ee282da0\" (UID: \"15e41c90-a220-49bf-ac62-9653ee282da0\") " Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.774431 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8plpv\" (UniqueName: \"kubernetes.io/projected/08262c5b-0d62-4a80-9b03-76fc4d2297f3-kube-api-access-8plpv\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.774452 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08262c5b-0d62-4a80-9b03-76fc4d2297f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.775044 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15e41c90-a220-49bf-ac62-9653ee282da0" (UID: "15e41c90-a220-49bf-ac62-9653ee282da0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.779802 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc" (OuterVolumeSpecName: "kube-api-access-tp7pc") pod "15e41c90-a220-49bf-ac62-9653ee282da0" (UID: "15e41c90-a220-49bf-ac62-9653ee282da0"). InnerVolumeSpecName "kube-api-access-tp7pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783014 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9znsh"] Feb 16 10:05:58 crc kubenswrapper[4814]: E0216 10:05:58.783515 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d36422e8-334d-414d-8d3f-b5a66ce72da2" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783565 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d36422e8-334d-414d-8d3f-b5a66ce72da2" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: E0216 10:05:58.783601 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08262c5b-0d62-4a80-9b03-76fc4d2297f3" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783615 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="08262c5b-0d62-4a80-9b03-76fc4d2297f3" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: E0216 10:05:58.783640 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e41c90-a220-49bf-ac62-9653ee282da0" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783650 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e41c90-a220-49bf-ac62-9653ee282da0" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: E0216 10:05:58.783660 4814 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4cf8b58-cd5c-46a9-9513-89178d899f14" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783667 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cf8b58-cd5c-46a9-9513-89178d899f14" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783921 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d36422e8-334d-414d-8d3f-b5a66ce72da2" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783951 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4cf8b58-cd5c-46a9-9513-89178d899f14" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783976 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="08262c5b-0d62-4a80-9b03-76fc4d2297f3" containerName="mariadb-database-create" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.783989 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e41c90-a220-49bf-ac62-9653ee282da0" containerName="mariadb-account-create-update" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.789199 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.792714 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjcbj" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.793594 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.806819 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9znsh"] Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.876447 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15e41c90-a220-49bf-ac62-9653ee282da0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.877125 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp7pc\" (UniqueName: \"kubernetes.io/projected/15e41c90-a220-49bf-ac62-9653ee282da0-kube-api-access-tp7pc\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.978835 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.978973 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmp2\" (UniqueName: \"kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.979019 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:58 crc kubenswrapper[4814]: I0216 10:05:58.979046 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.084025 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.084154 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhmp2\" (UniqueName: \"kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.084176 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.084204 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.088874 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.093929 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.094095 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.111412 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhmp2\" (UniqueName: \"kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2\") pod \"glance-db-sync-9znsh\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.115917 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9znsh" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.116403 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d31f-account-create-update-jgnvs" event={"ID":"15e41c90-a220-49bf-ac62-9653ee282da0","Type":"ContainerDied","Data":"8b51eee156095caa516544179d00460b86e7c9324e2ee8fca9cd8386381c53e5"} Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.116480 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b51eee156095caa516544179d00460b86e7c9324e2ee8fca9cd8386381c53e5" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.116620 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d31f-account-create-update-jgnvs" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.119795 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dxk8" event={"ID":"08262c5b-0d62-4a80-9b03-76fc4d2297f3","Type":"ContainerDied","Data":"7a5312503ade5f57f9e01b9eab7899a0a03723e3039276fab4e79d77ec2714d7"} Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.119864 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a5312503ade5f57f9e01b9eab7899a0a03723e3039276fab4e79d77ec2714d7" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.119820 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6dxk8" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.394462 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.405874 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36-etc-swift\") pod \"swift-storage-0\" (UID: \"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36\") " pod="openstack/swift-storage-0" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.475296 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.586479 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599105 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmst7\" (UniqueName: \"kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599236 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599299 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599396 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599409 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run" (OuterVolumeSpecName: "var-run") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599454 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599655 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts\") pod \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\" (UID: \"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a\") " Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599512 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599586 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.599924 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.600843 4814 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.600891 4814 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.600910 4814 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.600928 4814 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.600841 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts" (OuterVolumeSpecName: "scripts") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.617839 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7" (OuterVolumeSpecName: "kube-api-access-zmst7") pod "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" (UID: "66920cc8-61c6-4ec3-ba55-3a4d2c98f14a"). InnerVolumeSpecName "kube-api-access-zmst7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.702411 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:05:59 crc kubenswrapper[4814]: I0216 10:05:59.702479 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmst7\" (UniqueName: \"kubernetes.io/projected/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a-kube-api-access-zmst7\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.141583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-cvjvb" event={"ID":"66920cc8-61c6-4ec3-ba55-3a4d2c98f14a","Type":"ContainerDied","Data":"902816deb521a17526989b0d1915533a24a5a7f4e9d649b33308d195c581467a"} Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.141655 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902816deb521a17526989b0d1915533a24a5a7f4e9d649b33308d195c581467a" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.141672 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-cvjvb" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.147674 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dc2nv" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.238083 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9znsh"] Feb 16 10:06:00 crc kubenswrapper[4814]: W0216 10:06:00.248276 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod332682c6_8779_42d6_8445_1be863b81659.slice/crio-147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8 WatchSource:0}: Error finding container 147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8: Status 404 returned error can't find the container with id 147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8 Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.321283 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dc2nv-config-cvjvb"] Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.334587 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9g9kc"] Feb 16 10:06:00 crc kubenswrapper[4814]: E0216 10:06:00.335107 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" containerName="ovn-config" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.335128 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" containerName="ovn-config" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.335356 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" containerName="ovn-config" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.336122 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.352015 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.352856 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dc2nv-config-cvjvb"] Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.360595 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 10:06:00 crc kubenswrapper[4814]: W0216 10:06:00.362756 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33b56eb5_3fe6_4c32_9ddd_13eb56ef8b36.slice/crio-8259c9514ee2565c89ed14e2075c4528449fe8675532bd7c2b153aa09dd33f57 WatchSource:0}: Error finding container 8259c9514ee2565c89ed14e2075c4528449fe8675532bd7c2b153aa09dd33f57: Status 404 returned error can't find the container with id 8259c9514ee2565c89ed14e2075c4528449fe8675532bd7c2b153aa09dd33f57 Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.381835 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9g9kc"] Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.423228 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts\") pod \"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.423318 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sftcm\" (UniqueName: \"kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm\") pod 
\"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.525559 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts\") pod \"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.525618 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sftcm\" (UniqueName: \"kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm\") pod \"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.526383 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts\") pod \"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.551587 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sftcm\" (UniqueName: \"kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm\") pod \"root-account-create-update-9g9kc\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.619640 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dc2nv-config-lvxpz"] Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.621168 4814 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.624336 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.632044 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv-config-lvxpz"] Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.672941 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729086 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729163 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729195 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq7m4\" (UniqueName: \"kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729223 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729262 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.729886 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832486 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832595 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832620 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832647 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq7m4\" (UniqueName: \"kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832681 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.832722 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.833319 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.833386 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.833427 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.834852 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.838371 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.855714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq7m4\" (UniqueName: \"kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4\") pod \"ovn-controller-dc2nv-config-lvxpz\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:00 crc kubenswrapper[4814]: I0216 10:06:00.953511 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:01 crc kubenswrapper[4814]: I0216 10:06:01.019269 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66920cc8-61c6-4ec3-ba55-3a4d2c98f14a" path="/var/lib/kubelet/pods/66920cc8-61c6-4ec3-ba55-3a4d2c98f14a/volumes" Feb 16 10:06:01 crc kubenswrapper[4814]: I0216 10:06:01.056935 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9g9kc"] Feb 16 10:06:01 crc kubenswrapper[4814]: I0216 10:06:01.157510 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9znsh" event={"ID":"332682c6-8779-42d6-8445-1be863b81659","Type":"ContainerStarted","Data":"147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8"} Feb 16 10:06:01 crc kubenswrapper[4814]: I0216 10:06:01.159238 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"8259c9514ee2565c89ed14e2075c4528449fe8675532bd7c2b153aa09dd33f57"} Feb 16 10:06:01 crc kubenswrapper[4814]: W0216 10:06:01.287056 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a1ba8d_68e9_490d_a39f_4f9367666263.slice/crio-af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a WatchSource:0}: Error finding container af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a: Status 404 returned error can't find the container with id af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a Feb 16 10:06:01 crc kubenswrapper[4814]: I0216 10:06:01.917296 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dc2nv-config-lvxpz"] Feb 16 10:06:01 crc kubenswrapper[4814]: W0216 10:06:01.923247 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08ad78e7_0dea_49de_99ef_c583a6f3b0d6.slice/crio-9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2 WatchSource:0}: Error finding container 9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2: Status 404 returned error can't find the container with id 9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2 Feb 16 10:06:02 crc kubenswrapper[4814]: I0216 10:06:02.182289 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-lvxpz" event={"ID":"08ad78e7-0dea-49de-99ef-c583a6f3b0d6","Type":"ContainerStarted","Data":"9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2"} Feb 16 10:06:02 crc kubenswrapper[4814]: I0216 10:06:02.186843 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"c471573c6ee93ffc49e914abc369bd87cbaed82c51999f709e65a38b74b703c9"} Feb 16 10:06:02 crc kubenswrapper[4814]: I0216 10:06:02.186890 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"e6c5a7f462a9b263e4d6e0c18718e3a9b26fe6db92d5d9a5d1e082c2e055e566"} Feb 16 10:06:02 crc kubenswrapper[4814]: I0216 10:06:02.191079 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9g9kc" event={"ID":"38a1ba8d-68e9-490d-a39f-4f9367666263","Type":"ContainerStarted","Data":"879693e95db7e0dd088e0c9bcae4397573218786664ad510d2f6ed4f3ab28ec5"} Feb 16 10:06:02 crc kubenswrapper[4814]: I0216 10:06:02.191125 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9g9kc" event={"ID":"38a1ba8d-68e9-490d-a39f-4f9367666263","Type":"ContainerStarted","Data":"af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a"} Feb 16 10:06:02 crc 
kubenswrapper[4814]: I0216 10:06:02.215383 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9g9kc" podStartSLOduration=2.215363816 podStartE2EDuration="2.215363816s" podCreationTimestamp="2026-02-16 10:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:02.214834462 +0000 UTC m=+1219.907990642" watchObservedRunningTime="2026-02-16 10:06:02.215363816 +0000 UTC m=+1219.908519996" Feb 16 10:06:03 crc kubenswrapper[4814]: I0216 10:06:03.207483 4814 generic.go:334] "Generic (PLEG): container finished" podID="38a1ba8d-68e9-490d-a39f-4f9367666263" containerID="879693e95db7e0dd088e0c9bcae4397573218786664ad510d2f6ed4f3ab28ec5" exitCode=0 Feb 16 10:06:03 crc kubenswrapper[4814]: I0216 10:06:03.207643 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9g9kc" event={"ID":"38a1ba8d-68e9-490d-a39f-4f9367666263","Type":"ContainerDied","Data":"879693e95db7e0dd088e0c9bcae4397573218786664ad510d2f6ed4f3ab28ec5"} Feb 16 10:06:03 crc kubenswrapper[4814]: I0216 10:06:03.211905 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-lvxpz" event={"ID":"08ad78e7-0dea-49de-99ef-c583a6f3b0d6","Type":"ContainerStarted","Data":"dc3b1cfcec1081750a0ebdb74921aa2359f9a8690ae3b5073f48e830622fd98d"} Feb 16 10:06:03 crc kubenswrapper[4814]: I0216 10:06:03.214573 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"a7b4bb740e75761ec8533c10eea4ecb93d4c5e7c5ba7b166c3b53a26eac3833e"} Feb 16 10:06:03 crc kubenswrapper[4814]: I0216 10:06:03.214618 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"bd5413c1f0f441d55a86d9017f753d8e655f4c08e419bb65b429cca643ec5172"} Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.227795 4814 generic.go:334] "Generic (PLEG): container finished" podID="08ad78e7-0dea-49de-99ef-c583a6f3b0d6" containerID="dc3b1cfcec1081750a0ebdb74921aa2359f9a8690ae3b5073f48e830622fd98d" exitCode=0 Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.228024 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-lvxpz" event={"ID":"08ad78e7-0dea-49de-99ef-c583a6f3b0d6","Type":"ContainerDied","Data":"dc3b1cfcec1081750a0ebdb74921aa2359f9a8690ae3b5073f48e830622fd98d"} Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.247133 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"3df4fadddc3727c49f496691d485b3223d40a3b79f3fe6defabb68d650334739"} Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.638294 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.704216 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739643 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739812 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739877 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739898 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts\") pod \"38a1ba8d-68e9-490d-a39f-4f9367666263\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739938 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.739976 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sftcm\" (UniqueName: 
\"kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm\") pod \"38a1ba8d-68e9-490d-a39f-4f9367666263\" (UID: \"38a1ba8d-68e9-490d-a39f-4f9367666263\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.740030 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.740057 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq7m4\" (UniqueName: \"kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4\") pod \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\" (UID: \"08ad78e7-0dea-49de-99ef-c583a6f3b0d6\") " Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.742245 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run" (OuterVolumeSpecName: "var-run") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.742322 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.742611 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38a1ba8d-68e9-490d-a39f-4f9367666263" (UID: "38a1ba8d-68e9-490d-a39f-4f9367666263"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.743109 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.743644 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.744259 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts" (OuterVolumeSpecName: "scripts") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.748804 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4" (OuterVolumeSpecName: "kube-api-access-cq7m4") pod "08ad78e7-0dea-49de-99ef-c583a6f3b0d6" (UID: "08ad78e7-0dea-49de-99ef-c583a6f3b0d6"). InnerVolumeSpecName "kube-api-access-cq7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.748871 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm" (OuterVolumeSpecName: "kube-api-access-sftcm") pod "38a1ba8d-68e9-490d-a39f-4f9367666263" (UID: "38a1ba8d-68e9-490d-a39f-4f9367666263"). InnerVolumeSpecName "kube-api-access-sftcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842334 4814 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842386 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842399 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ba8d-68e9-490d-a39f-4f9367666263-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842414 4814 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-log-ovn\") on node 
\"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842427 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sftcm\" (UniqueName: \"kubernetes.io/projected/38a1ba8d-68e9-490d-a39f-4f9367666263-kube-api-access-sftcm\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842440 4814 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842454 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq7m4\" (UniqueName: \"kubernetes.io/projected/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-kube-api-access-cq7m4\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:04 crc kubenswrapper[4814]: I0216 10:06:04.842466 4814 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08ad78e7-0dea-49de-99ef-c583a6f3b0d6-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.266593 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"4b2cd1e4316877e7fe3404b3e1f24f671f057405bc91a18ec5cc65dd4c63218c"} Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.267468 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"5d50a082e2dfb13b4394f4ac25dda8d2123fd1f744dd91150f748097d998f2fd"} Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.267489 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"54a487ce5d6d1e33ad8727d96c2075160609d96c6943ed2daf4563dd26f3b830"} Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.271009 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9g9kc" Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.271022 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9g9kc" event={"ID":"38a1ba8d-68e9-490d-a39f-4f9367666263","Type":"ContainerDied","Data":"af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a"} Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.271107 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6da6d0145e5a7b55297cd217c99ab827a918085cd4048b1be81cc74f83f31a" Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.278763 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dc2nv-config-lvxpz" event={"ID":"08ad78e7-0dea-49de-99ef-c583a6f3b0d6","Type":"ContainerDied","Data":"9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2"} Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.278822 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee0f78f1fdbaeaf97ebf9d28868f26b1f2fc2aa740c7c37805ee6eea40cf5e2" Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.278939 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dc2nv-config-lvxpz" Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.752728 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dc2nv-config-lvxpz"] Feb 16 10:06:05 crc kubenswrapper[4814]: I0216 10:06:05.769082 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dc2nv-config-lvxpz"] Feb 16 10:06:06 crc kubenswrapper[4814]: I0216 10:06:06.633965 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9g9kc"] Feb 16 10:06:06 crc kubenswrapper[4814]: I0216 10:06:06.643698 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9g9kc"] Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.010628 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ad78e7-0dea-49de-99ef-c583a6f3b0d6" path="/var/lib/kubelet/pods/08ad78e7-0dea-49de-99ef-c583a6f3b0d6/volumes" Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.013009 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a1ba8d-68e9-490d-a39f-4f9367666263" path="/var/lib/kubelet/pods/38a1ba8d-68e9-490d-a39f-4f9367666263/volumes" Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.324424 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"abe3d95f622c2cd72ab1161c9f93f483e8fbd4ed06fae1cb4eaabe7d83faa678"} Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.324495 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"4fb16184b64ea1c748a97dd75cd41166db4054efeafbfd1f4d9a9d5e60c8cf61"} Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.324514 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"3c69426757251d900480dbb56405acf2dc96989fcc2267ef7639122090f34e77"} Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.324526 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"488b7384b0bf6df2282ce06fa5e6caf1237b29efd4064bbb394439be9a06c7a0"} Feb 16 10:06:07 crc kubenswrapper[4814]: I0216 10:06:07.324731 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"320833452a7a0d9c0f6a3863cf82a9391b53d44d4ea7d29354c975102d697acd"} Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:07.534184 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:07.537793 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:07.960874 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:07.961498 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.348446 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"dacffc90735b17a2371f937348063eccd7ddbdb9554f43597c11ffc61664fb6f"} Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.348614 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36","Type":"ContainerStarted","Data":"3d5ba75ca02adbdcf225faeaed10ad81c00685507524a45b61c004262b170342"} Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.350204 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.411956 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.04771211 podStartE2EDuration="42.411932041s" podCreationTimestamp="2026-02-16 10:05:26 +0000 UTC" firstStartedPulling="2026-02-16 10:06:00.376202607 +0000 UTC m=+1218.069358787" lastFinishedPulling="2026-02-16 10:06:05.740422538 +0000 UTC m=+1223.433578718" observedRunningTime="2026-02-16 10:06:08.401655326 +0000 UTC m=+1226.094811506" watchObservedRunningTime="2026-02-16 10:06:08.411932041 +0000 UTC m=+1226.105088221" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.853295 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:06:08 crc kubenswrapper[4814]: E0216 10:06:08.853827 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ad78e7-0dea-49de-99ef-c583a6f3b0d6" containerName="ovn-config" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.853852 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ad78e7-0dea-49de-99ef-c583a6f3b0d6" containerName="ovn-config" Feb 16 10:06:08 crc kubenswrapper[4814]: E0216 10:06:08.853864 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38a1ba8d-68e9-490d-a39f-4f9367666263" containerName="mariadb-account-create-update" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.853872 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="38a1ba8d-68e9-490d-a39f-4f9367666263" containerName="mariadb-account-create-update" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.854060 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="38a1ba8d-68e9-490d-a39f-4f9367666263" containerName="mariadb-account-create-update" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.854097 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ad78e7-0dea-49de-99ef-c583a6f3b0d6" containerName="ovn-config" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.855164 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.863909 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.932890 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.934331 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.934470 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " 
pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.934513 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.935033 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.935346 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:08 crc kubenswrapper[4814]: I0216 10:06:08.935417 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j49ks\" (UniqueName: \"kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.036828 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" 
(UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.036897 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j49ks\" (UniqueName: \"kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.036935 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.037014 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.037048 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.037149 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " 
pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.038831 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.039053 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.039343 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.039605 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.039948 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.064401 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j49ks\" (UniqueName: \"kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks\") pod \"dnsmasq-dns-6c9996885f-mkwrs\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") " pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.174819 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:09 crc kubenswrapper[4814]: I0216 10:06:09.833954 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:06:10 crc kubenswrapper[4814]: I0216 10:06:10.301289 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="6a0b4bfb-2144-4fd9-be15-07396c44a11c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 16 10:06:10 crc kubenswrapper[4814]: I0216 10:06:10.310100 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b4e759af-f091-47c0-accc-c68b45b277fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Feb 16 10:06:10 crc kubenswrapper[4814]: I0216 10:06:10.381811 4814 generic.go:334] "Generic (PLEG): container finished" podID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerID="c182a7239b640c357a29829a6eedac2d3178459acaca2202a7a8c5071cebd7d4" exitCode=0 Feb 16 10:06:10 crc kubenswrapper[4814]: I0216 10:06:10.381866 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" event={"ID":"a6929b69-85c9-4084-9ff5-4e3a6af602dd","Type":"ContainerDied","Data":"c182a7239b640c357a29829a6eedac2d3178459acaca2202a7a8c5071cebd7d4"} Feb 16 10:06:10 crc kubenswrapper[4814]: I0216 10:06:10.381905 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" event={"ID":"a6929b69-85c9-4084-9ff5-4e3a6af602dd","Type":"ContainerStarted","Data":"9a39520aeee0ce611da85ecce2e7bebf74c86f9eede4a3ce54800fb878184794"} Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.133373 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="19661670-37f9-4577-93d4-cd87303f3008" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.641071 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-88dqn"] Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.642488 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.645329 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.673968 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-88dqn"] Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.693840 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts\") pod \"root-account-create-update-88dqn\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.693917 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpzw\" (UniqueName: \"kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw\") pod \"root-account-create-update-88dqn\" (UID: 
\"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.796385 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts\") pod \"root-account-create-update-88dqn\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.796508 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpzw\" (UniqueName: \"kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw\") pod \"root-account-create-update-88dqn\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.797862 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts\") pod \"root-account-create-update-88dqn\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.824511 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpzw\" (UniqueName: \"kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw\") pod \"root-account-create-update-88dqn\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.846693 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.847091 4814 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/prometheus-metric-storage-0" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="prometheus" containerID="cri-o://c1efa8e6033f67f0eccc7a1db7c17256aac48945f0774924100251653d0e2d30" gracePeriod=600 Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.847188 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="config-reloader" containerID="cri-o://8e1709bbe8837ab504fa2a3897057bdf723da315534460f4205aeeddfe80de75" gracePeriod=600 Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.847228 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="thanos-sidecar" containerID="cri-o://2c46ab5dcfa13b2c38db786abffab6e62cd2af9558795c0ee42ae18e4fb8056f" gracePeriod=600 Feb 16 10:06:11 crc kubenswrapper[4814]: I0216 10:06:11.984475 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433107 4814 generic.go:334] "Generic (PLEG): container finished" podID="9320085e-0598-4822-aa1d-5b2f9469f573" containerID="2c46ab5dcfa13b2c38db786abffab6e62cd2af9558795c0ee42ae18e4fb8056f" exitCode=0 Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433494 4814 generic.go:334] "Generic (PLEG): container finished" podID="9320085e-0598-4822-aa1d-5b2f9469f573" containerID="8e1709bbe8837ab504fa2a3897057bdf723da315534460f4205aeeddfe80de75" exitCode=0 Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433509 4814 generic.go:334] "Generic (PLEG): container finished" podID="9320085e-0598-4822-aa1d-5b2f9469f573" containerID="c1efa8e6033f67f0eccc7a1db7c17256aac48945f0774924100251653d0e2d30" exitCode=0 Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433565 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerDied","Data":"2c46ab5dcfa13b2c38db786abffab6e62cd2af9558795c0ee42ae18e4fb8056f"} Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433602 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerDied","Data":"8e1709bbe8837ab504fa2a3897057bdf723da315534460f4205aeeddfe80de75"} Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.433617 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerDied","Data":"c1efa8e6033f67f0eccc7a1db7c17256aac48945f0774924100251653d0e2d30"} Feb 16 10:06:12 crc kubenswrapper[4814]: I0216 10:06:12.533708 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" 
containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": dial tcp 10.217.0.113:9090: connect: connection refused" Feb 16 10:06:17 crc kubenswrapper[4814]: I0216 10:06:17.533784 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": dial tcp 10.217.0.113:9090: connect: connection refused" Feb 16 10:06:19 crc kubenswrapper[4814]: E0216 10:06:19.357709 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Feb 16 10:06:19 crc kubenswrapper[4814]: E0216 10:06:19.358333 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Feb 16 10:06:19 crc kubenswrapper[4814]: E0216 10:06:19.359025 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.164:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9znsh_openstack(332682c6-8779-42d6-8445-1be863b81659): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 16 10:06:19 crc kubenswrapper[4814]: E0216 10:06:19.360526 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9znsh" podUID="332682c6-8779-42d6-8445-1be863b81659" Feb 16 10:06:19 crc kubenswrapper[4814]: E0216 10:06:19.508470 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9znsh" podUID="332682c6-8779-42d6-8445-1be863b81659" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.749182 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.888869 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6vtp\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.888949 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889044 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889166 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889219 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889401 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889618 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889678 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0\") pod 
\"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889789 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.889822 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out\") pod \"9320085e-0598-4822-aa1d-5b2f9469f573\" (UID: \"9320085e-0598-4822-aa1d-5b2f9469f573\") " Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.890648 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.890667 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.898970 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp" (OuterVolumeSpecName: "kube-api-access-f6vtp") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "kube-api-access-f6vtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.895020 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.899807 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config" (OuterVolumeSpecName: "config") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.900011 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.903925 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out" (OuterVolumeSpecName: "config-out") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.924128 4814 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.924278 4814 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.925775 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.926161 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). 
InnerVolumeSpecName "pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.940201 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-88dqn"] Feb 16 10:06:19 crc kubenswrapper[4814]: I0216 10:06:19.950996 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config" (OuterVolumeSpecName: "web-config") pod "9320085e-0598-4822-aa1d-5b2f9469f573" (UID: "9320085e-0598-4822-aa1d-5b2f9469f573"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.026991 4814 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027027 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027037 4814 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9320085e-0598-4822-aa1d-5b2f9469f573-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027069 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") on node \"crc\" " Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027081 4814 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027093 4814 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9320085e-0598-4822-aa1d-5b2f9469f573-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027104 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6vtp\" (UniqueName: \"kubernetes.io/projected/9320085e-0598-4822-aa1d-5b2f9469f573-kube-api-access-f6vtp\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.027116 4814 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9320085e-0598-4822-aa1d-5b2f9469f573-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.056835 4814 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.057023 4814 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c") on node "crc" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.136492 4814 reconciler_common.go:293] "Volume detached for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.302916 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.311796 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.528412 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9320085e-0598-4822-aa1d-5b2f9469f573","Type":"ContainerDied","Data":"c385e7263af8b21810c55f084f19c91629f2d2592bc42b93cf53e48dbafda933"} Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.528881 4814 scope.go:117] "RemoveContainer" containerID="2c46ab5dcfa13b2c38db786abffab6e62cd2af9558795c0ee42ae18e4fb8056f" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.528666 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.530194 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" event={"ID":"a6929b69-85c9-4084-9ff5-4e3a6af602dd","Type":"ContainerStarted","Data":"8136a7b9e0d23176422f169ef301f208938d614ee67b9aa08097ffa2eea1bc17"} Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.531252 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.533227 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-88dqn" event={"ID":"ac41f60a-214e-4093-ae06-4491ce820f53","Type":"ContainerStarted","Data":"1b6509ca2734d4f9021cc85992c419fc68e301acbdb3faf48b528c4e8e2f5950"} Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.533253 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-88dqn" event={"ID":"ac41f60a-214e-4093-ae06-4491ce820f53","Type":"ContainerStarted","Data":"dd8357ff2ec3860d7d124efe1d31be7cac1e5eeb1e7ff4477108cb9a08efdbfc"} Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.559003 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" podStartSLOduration=12.558975036 podStartE2EDuration="12.558975036s" podCreationTimestamp="2026-02-16 10:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:20.558565905 +0000 UTC m=+1238.251722105" watchObservedRunningTime="2026-02-16 10:06:20.558975036 +0000 UTC m=+1238.252131216" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.578155 4814 scope.go:117] "RemoveContainer" containerID="8e1709bbe8837ab504fa2a3897057bdf723da315534460f4205aeeddfe80de75" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 
10:06:20.588429 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-88dqn" podStartSLOduration=9.588405269999999 podStartE2EDuration="9.58840527s" podCreationTimestamp="2026-02-16 10:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:20.585608403 +0000 UTC m=+1238.278764603" watchObservedRunningTime="2026-02-16 10:06:20.58840527 +0000 UTC m=+1238.281561460" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.624055 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.630887 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.631199 4814 scope.go:117] "RemoveContainer" containerID="c1efa8e6033f67f0eccc7a1db7c17256aac48945f0774924100251653d0e2d30" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.667785 4814 scope.go:117] "RemoveContainer" containerID="077c41c360689d8f2e76ccda73a35ea7fde697cbec2d1cd364fa21bf2abe4717" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.688623 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:20 crc kubenswrapper[4814]: E0216 10:06:20.689038 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="init-config-reloader" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689063 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="init-config-reloader" Feb 16 10:06:20 crc kubenswrapper[4814]: E0216 10:06:20.689072 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="thanos-sidecar" Feb 16 
10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689083 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="thanos-sidecar" Feb 16 10:06:20 crc kubenswrapper[4814]: E0216 10:06:20.689123 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="prometheus" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689130 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="prometheus" Feb 16 10:06:20 crc kubenswrapper[4814]: E0216 10:06:20.689155 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="config-reloader" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689161 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="config-reloader" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689350 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="prometheus" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689374 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="thanos-sidecar" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.689382 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" containerName="config-reloader" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.696896 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.703341 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.705127 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.705164 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.705579 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.705795 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.705922 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.706494 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.706661 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-qbpqm" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.706822 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.711180 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757195 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757259 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757291 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757315 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757345 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757408 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgzb\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-kube-api-access-wxgzb\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757456 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757483 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757510 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757527 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757580 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757615 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.757649 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.859756 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.859845 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.859896 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.859944 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.859990 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860032 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860093 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860140 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgzb\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-kube-api-access-wxgzb\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860194 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860221 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860255 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860280 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860309 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.860986 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.861708 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.862181 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.866062 4814 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.866256 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/90bf6676d2b1c4d0c7b45da57bbcb46d490752accd713708e5a50469d2e9677d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.869356 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.872147 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.872204 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.873569 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.877940 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.878587 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.886427 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-config\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.887599 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.892713 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgzb\" (UniqueName: \"kubernetes.io/projected/d64fe4ad-1b8d-4f94-b825-675bb6bd7f89-kube-api-access-wxgzb\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:20 crc kubenswrapper[4814]: I0216 10:06:20.929263 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bbf5aacd-befd-4a98-9189-b3fde7716d9c\") pod \"prometheus-metric-storage-0\" (UID: \"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89\") " pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.005732 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9320085e-0598-4822-aa1d-5b2f9469f573" path="/var/lib/kubelet/pods/9320085e-0598-4822-aa1d-5b2f9469f573/volumes" Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.071384 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.132755 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.546045 4814 generic.go:334] "Generic (PLEG): container finished" podID="ac41f60a-214e-4093-ae06-4491ce820f53" containerID="1b6509ca2734d4f9021cc85992c419fc68e301acbdb3faf48b528c4e8e2f5950" exitCode=0 Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.546216 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-88dqn" event={"ID":"ac41f60a-214e-4093-ae06-4491ce820f53","Type":"ContainerDied","Data":"1b6509ca2734d4f9021cc85992c419fc68e301acbdb3faf48b528c4e8e2f5950"} Feb 16 10:06:21 crc kubenswrapper[4814]: I0216 10:06:21.589171 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.365670 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-czp2g"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.368265 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.379978 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-czp2g"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.504344 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts\") pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.504432 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fqkm\" (UniqueName: \"kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm\") pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.555658 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerStarted","Data":"8cd6e49ea81dcbd8c2d592d6a4ab6b043eec84cf554392ea2822e9bba3c7e902"} Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.605747 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts\") pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.606219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fqkm\" (UniqueName: \"kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm\") 
pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.606861 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts\") pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.655506 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fqkm\" (UniqueName: \"kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm\") pod \"cinder-db-create-czp2g\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") " pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.714100 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-czp2g" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.733030 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a97d-account-create-update-9gppj"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.734213 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.742825 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.755704 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a97d-account-create-update-9gppj"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.917785 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfklg\" (UniqueName: \"kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.917823 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.932368 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7wb47"] Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.933916 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:22 crc kubenswrapper[4814]: I0216 10:06:22.961266 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7wb47"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.020173 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfklg\" (UniqueName: \"kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.020789 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.020839 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sjk\" (UniqueName: \"kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk\") pod \"barbican-db-create-7wb47\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.020891 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts\") pod \"barbican-db-create-7wb47\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.022281 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.037477 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-b8klg"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.048047 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.064024 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.064288 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.064497 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.064687 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7vv8q" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.075843 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfklg\" (UniqueName: \"kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg\") pod \"cinder-a97d-account-create-update-9gppj\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") " pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.079467 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a97d-account-create-update-9gppj" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.122634 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts\") pod \"barbican-db-create-7wb47\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.122806 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.122869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.122899 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhpz\" (UniqueName: \"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.123025 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5sjk\" (UniqueName: \"kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk\") pod \"barbican-db-create-7wb47\" (UID: 
\"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.124942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts\") pod \"barbican-db-create-7wb47\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.127412 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8klg"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.207415 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5sjk\" (UniqueName: \"kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk\") pod \"barbican-db-create-7wb47\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") " pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.224876 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.227133 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.227256 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhpz\" (UniqueName: 
\"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.238776 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.263827 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.301306 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7wb47" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.315578 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhpz\" (UniqueName: \"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz\") pod \"keystone-db-sync-b8klg\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.330011 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fn7m9"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.345031 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.363443 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fn7m9"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.431576 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2vqh\" (UniqueName: \"kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.431671 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.507791 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e256-account-create-update-clbl4"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.509004 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.519240 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.533748 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.533909 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2vqh\" (UniqueName: \"kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.534965 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.543018 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.571410 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e256-account-create-update-clbl4"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.582237 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-759b-account-create-update-xd2c6"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.585132 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.611284 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.635721 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2vqh\" (UniqueName: \"kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh\") pod \"neutron-db-create-fn7m9\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") " pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.667000 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxfd\" (UniqueName: \"kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.667416 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.757668 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fn7m9" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.778035 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-759b-account-create-update-xd2c6"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.818254 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrxfd\" (UniqueName: \"kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.818989 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.819031 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.819246 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872sg\" (UniqueName: \"kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc 
kubenswrapper[4814]: I0216 10:06:23.820852 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.824680 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-czp2g"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.860421 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrxfd\" (UniqueName: \"kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd\") pod \"barbican-e256-account-create-update-clbl4\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") " pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: W0216 10:06:23.862152 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b2e189e_8b3c_47a6_840f_9bca1dc9a429.slice/crio-e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63 WatchSource:0}: Error finding container e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63: Status 404 returned error can't find the container with id e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63 Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.863109 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-t9fz6"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.867208 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.872617 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-tkr8k" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.872847 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.886274 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t9fz6"] Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.903917 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e256-account-create-update-clbl4" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.943946 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.944174 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872sg\" (UniqueName: \"kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.946152 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " 
pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:23 crc kubenswrapper[4814]: I0216 10:06:23.973950 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872sg\" (UniqueName: \"kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg\") pod \"neutron-759b-account-create-update-xd2c6\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") " pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.066955 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.067219 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.067297 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.068872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k2t\" (UniqueName: \"kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t\") pod \"watcher-db-sync-t9fz6\" (UID: 
\"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.143410 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a97d-account-create-update-9gppj"] Feb 16 10:06:24 crc kubenswrapper[4814]: W0216 10:06:24.170048 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9498592a_bccd_4780_bee1_7bcf7ab10ad2.slice/crio-5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18 WatchSource:0}: Error finding container 5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18: Status 404 returned error can't find the container with id 5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18 Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.170853 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.170905 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.170959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5k2t\" (UniqueName: \"kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 
10:06:24.171013 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.183841 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.215634 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.336643 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7wb47"] Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.379301 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-b8klg"] Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.407274 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"] Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.408409 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="dnsmasq-dns" containerID="cri-o://867ba8ec1f5475ac03128d2fb1f5aa259505ad3f36d2e25c88b7f2aca22642ab" gracePeriod=10 Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.478432 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data\") pod \"watcher-db-sync-t9fz6\" (UID: 
\"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.482481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.484097 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5k2t\" (UniqueName: \"kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t\") pod \"watcher-db-sync-t9fz6\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.556820 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-759b-account-create-update-xd2c6" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.563493 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.580285 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlpzw\" (UniqueName: \"kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw\") pod \"ac41f60a-214e-4093-ae06-4491ce820f53\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.580751 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts\") pod \"ac41f60a-214e-4093-ae06-4491ce820f53\" (UID: \"ac41f60a-214e-4093-ae06-4491ce820f53\") " Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.584860 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac41f60a-214e-4093-ae06-4491ce820f53" (UID: "ac41f60a-214e-4093-ae06-4491ce820f53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.622163 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw" (OuterVolumeSpecName: "kube-api-access-hlpzw") pod "ac41f60a-214e-4093-ae06-4491ce820f53" (UID: "ac41f60a-214e-4093-ae06-4491ce820f53"). InnerVolumeSpecName "kube-api-access-hlpzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.637713 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fn7m9"] Feb 16 10:06:24 crc kubenswrapper[4814]: W0216 10:06:24.646166 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc042a5c_d892_4056_ba5f_28fbdeac4a5e.slice/crio-a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a WatchSource:0}: Error finding container a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a: Status 404 returned error can't find the container with id a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.664337 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e256-account-create-update-clbl4"] Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.689741 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac41f60a-214e-4093-ae06-4491ce820f53-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.690440 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlpzw\" (UniqueName: \"kubernetes.io/projected/ac41f60a-214e-4093-ae06-4491ce820f53-kube-api-access-hlpzw\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.706429 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e256-account-create-update-clbl4" event={"ID":"dc042a5c-d892-4056-ba5f-28fbdeac4a5e","Type":"ContainerStarted","Data":"a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.707308 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-czp2g" 
event={"ID":"0b2e189e-8b3c-47a6-840f-9bca1dc9a429","Type":"ContainerStarted","Data":"e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.711561 4814 generic.go:334] "Generic (PLEG): container finished" podID="22837145-ddd2-4606-bc52-d633720bdeb2" containerID="867ba8ec1f5475ac03128d2fb1f5aa259505ad3f36d2e25c88b7f2aca22642ab" exitCode=0 Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.711615 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" event={"ID":"22837145-ddd2-4606-bc52-d633720bdeb2","Type":"ContainerDied","Data":"867ba8ec1f5475ac03128d2fb1f5aa259505ad3f36d2e25c88b7f2aca22642ab"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.713135 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8klg" event={"ID":"0842a785-6944-4bb8-8c72-65aa4b098128","Type":"ContainerStarted","Data":"ab24c44ee772c063943c36561f0ceea17cd782291cc2e9610aaff3c0aff118d2"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.722263 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wb47" event={"ID":"510cfe06-8c29-40c1-abb9-0290e4d93541","Type":"ContainerStarted","Data":"5b8f009cde8de88d145300560ea0a376653ec763dee7a16df81bde226aaea72a"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.730305 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-88dqn" event={"ID":"ac41f60a-214e-4093-ae06-4491ce820f53","Type":"ContainerDied","Data":"dd8357ff2ec3860d7d124efe1d31be7cac1e5eeb1e7ff4477108cb9a08efdbfc"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.730357 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd8357ff2ec3860d7d124efe1d31be7cac1e5eeb1e7ff4477108cb9a08efdbfc" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.730434 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-88dqn" Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.756891 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a97d-account-create-update-9gppj" event={"ID":"9498592a-bccd-4780-bee1-7bcf7ab10ad2","Type":"ContainerStarted","Data":"5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18"} Feb 16 10:06:24 crc kubenswrapper[4814]: I0216 10:06:24.766080 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fn7m9" event={"ID":"67344a8d-c26c-483f-b974-da997583505e","Type":"ContainerStarted","Data":"a093151c58334087ba1d021ca2e21860131b633eb15ab132e7b16cc4bee3b76c"} Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.000900 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.272319 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.368396 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-759b-account-create-update-xd2c6"] Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.425921 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.425985 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.426117 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.426157 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.426231 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcmw\" (UniqueName: \"kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") " 
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.437286 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw" (OuterVolumeSpecName: "kube-api-access-dfcmw") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2"). InnerVolumeSpecName "kube-api-access-dfcmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.557412 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.606201 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.619794 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.619860 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcmw\" (UniqueName: \"kubernetes.io/projected/22837145-ddd2-4606-bc52-d633720bdeb2-kube-api-access-dfcmw\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.619877 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:25 crc kubenswrapper[4814]: E0216 10:06:25.636298 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config podName:22837145-ddd2-4606-bc52-d633720bdeb2 nodeName:}" failed. No retries permitted until 2026-02-16 10:06:26.136254006 +0000 UTC m=+1243.829410186 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2") : error deleting /var/lib/kubelet/pods/22837145-ddd2-4606-bc52-d633720bdeb2/volume-subpaths: remove /var/lib/kubelet/pods/22837145-ddd2-4606-bc52-d633720bdeb2/volume-subpaths: no such file or directory
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.636651 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.717154 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t9fz6"]
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.722813 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.806732 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t9fz6" event={"ID":"926559f6-8c52-4fdf-913e-2f2e43c4e409","Type":"ContainerStarted","Data":"4b26b35bab36790ec32cdc25e139f4d6ebf4d8bfec3729fb95aa01da39677382"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.815694 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wb47" event={"ID":"510cfe06-8c29-40c1-abb9-0290e4d93541","Type":"ContainerStarted","Data":"79550fdadfd7074925ecf632f7593d1c9b4f3229bccf55514e961b5162704f80"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.830388 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fn7m9" event={"ID":"67344a8d-c26c-483f-b974-da997583505e","Type":"ContainerStarted","Data":"7180e58b05cce41ac45579d89f3adee4f78c3574c740a4e0e55aaa57a7f36d3c"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.842306 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-7wb47" podStartSLOduration=3.842276931 podStartE2EDuration="3.842276931s" podCreationTimestamp="2026-02-16 10:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:25.838654481 +0000 UTC m=+1243.531810661" watchObservedRunningTime="2026-02-16 10:06:25.842276931 +0000 UTC m=+1243.535433111"
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.844239 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-czp2g" event={"ID":"0b2e189e-8b3c-47a6-840f-9bca1dc9a429","Type":"ContainerStarted","Data":"1ff50a2aa5b814bf432baf20cc7d53aad76ed22c0e4fc31d3537ab7640253902"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.863407 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9cd786565-5w9lt" event={"ID":"22837145-ddd2-4606-bc52-d633720bdeb2","Type":"ContainerDied","Data":"26d79121d61b7133ab8c146fe5fe9f0decd447a4ac39c796e28f970730cc3fde"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.863720 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9cd786565-5w9lt"
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.865457 4814 scope.go:117] "RemoveContainer" containerID="867ba8ec1f5475ac03128d2fb1f5aa259505ad3f36d2e25c88b7f2aca22642ab"
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.867169 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-fn7m9" podStartSLOduration=2.867144918 podStartE2EDuration="2.867144918s" podCreationTimestamp="2026-02-16 10:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:25.864442833 +0000 UTC m=+1243.557599013" watchObservedRunningTime="2026-02-16 10:06:25.867144918 +0000 UTC m=+1243.560301098"
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.882272 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759b-account-create-update-xd2c6" event={"ID":"2c27257c-47e3-46d2-9324-70c85dd9e6ed","Type":"ContainerStarted","Data":"720ef04cd28e25a89187fb058bc2ae034eb1febc712fba2167a93b9ae3c6c891"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.888994 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a97d-account-create-update-9gppj" event={"ID":"9498592a-bccd-4780-bee1-7bcf7ab10ad2","Type":"ContainerStarted","Data":"f2826ac76e48f667b92e969f95a7eea75a364665640ebc68b931914daa23173f"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.905556 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerStarted","Data":"9ebe37a2ee5841a2cc73b559f42a4ac9dcd8d56f4df6d8ac270676c5516ef688"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.912976 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e256-account-create-update-clbl4" event={"ID":"dc042a5c-d892-4056-ba5f-28fbdeac4a5e","Type":"ContainerStarted","Data":"828e08bacf56261142c80fd0af11ca4b7d35bf37dd30201086fde931b8b62b80"}
Feb 16 10:06:25 crc kubenswrapper[4814]: I0216 10:06:25.927297 4814 scope.go:117] "RemoveContainer" containerID="660f63563650a18fc7c3da50f29f077a141a28e2475b13c81a202bd978eae756"
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.006534 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-e256-account-create-update-clbl4" podStartSLOduration=3.00650972 podStartE2EDuration="3.00650972s" podCreationTimestamp="2026-02-16 10:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:25.996785791 +0000 UTC m=+1243.689941961" watchObservedRunningTime="2026-02-16 10:06:26.00650972 +0000 UTC m=+1243.699665900"
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.006996 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-759b-account-create-update-xd2c6" podStartSLOduration=3.006991193 podStartE2EDuration="3.006991193s" podCreationTimestamp="2026-02-16 10:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:25.978323351 +0000 UTC m=+1243.671479531" watchObservedRunningTime="2026-02-16 10:06:26.006991193 +0000 UTC m=+1243.700147373"
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.136478 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") pod \"22837145-ddd2-4606-bc52-d633720bdeb2\" (UID: \"22837145-ddd2-4606-bc52-d633720bdeb2\") "
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.137163 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config" (OuterVolumeSpecName: "config") pod "22837145-ddd2-4606-bc52-d633720bdeb2" (UID: "22837145-ddd2-4606-bc52-d633720bdeb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.137711 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22837145-ddd2-4606-bc52-d633720bdeb2-config\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.297954 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"]
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.311409 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9cd786565-5w9lt"]
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.933910 4814 generic.go:334] "Generic (PLEG): container finished" podID="2c27257c-47e3-46d2-9324-70c85dd9e6ed" containerID="ba7e16c7dd5560fea3213bd4c30db64895628c58cdcc55a9ef477b79c62dd555" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.934006 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759b-account-create-update-xd2c6" event={"ID":"2c27257c-47e3-46d2-9324-70c85dd9e6ed","Type":"ContainerDied","Data":"ba7e16c7dd5560fea3213bd4c30db64895628c58cdcc55a9ef477b79c62dd555"}
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.938665 4814 generic.go:334] "Generic (PLEG): container finished" podID="510cfe06-8c29-40c1-abb9-0290e4d93541" containerID="79550fdadfd7074925ecf632f7593d1c9b4f3229bccf55514e961b5162704f80" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.938870 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wb47" event={"ID":"510cfe06-8c29-40c1-abb9-0290e4d93541","Type":"ContainerDied","Data":"79550fdadfd7074925ecf632f7593d1c9b4f3229bccf55514e961b5162704f80"}
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.940441 4814 generic.go:334] "Generic (PLEG): container finished" podID="9498592a-bccd-4780-bee1-7bcf7ab10ad2" containerID="f2826ac76e48f667b92e969f95a7eea75a364665640ebc68b931914daa23173f" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.940514 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a97d-account-create-update-9gppj" event={"ID":"9498592a-bccd-4780-bee1-7bcf7ab10ad2","Type":"ContainerDied","Data":"f2826ac76e48f667b92e969f95a7eea75a364665640ebc68b931914daa23173f"}
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.944394 4814 generic.go:334] "Generic (PLEG): container finished" podID="67344a8d-c26c-483f-b974-da997583505e" containerID="7180e58b05cce41ac45579d89f3adee4f78c3574c740a4e0e55aaa57a7f36d3c" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.944495 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fn7m9" event={"ID":"67344a8d-c26c-483f-b974-da997583505e","Type":"ContainerDied","Data":"7180e58b05cce41ac45579d89f3adee4f78c3574c740a4e0e55aaa57a7f36d3c"}
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.946228 4814 generic.go:334] "Generic (PLEG): container finished" podID="dc042a5c-d892-4056-ba5f-28fbdeac4a5e" containerID="828e08bacf56261142c80fd0af11ca4b7d35bf37dd30201086fde931b8b62b80" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.946271 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e256-account-create-update-clbl4" event={"ID":"dc042a5c-d892-4056-ba5f-28fbdeac4a5e","Type":"ContainerDied","Data":"828e08bacf56261142c80fd0af11ca4b7d35bf37dd30201086fde931b8b62b80"}
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.952701 4814 generic.go:334] "Generic (PLEG): container finished" podID="0b2e189e-8b3c-47a6-840f-9bca1dc9a429" containerID="1ff50a2aa5b814bf432baf20cc7d53aad76ed22c0e4fc31d3537ab7640253902" exitCode=0
Feb 16 10:06:26 crc kubenswrapper[4814]: I0216 10:06:26.952766 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-czp2g" event={"ID":"0b2e189e-8b3c-47a6-840f-9bca1dc9a429","Type":"ContainerDied","Data":"1ff50a2aa5b814bf432baf20cc7d53aad76ed22c0e4fc31d3537ab7640253902"}
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.028670 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" path="/var/lib/kubelet/pods/22837145-ddd2-4606-bc52-d633720bdeb2/volumes"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.388288 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a97d-account-create-update-9gppj"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.395447 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-czp2g"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.468716 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fqkm\" (UniqueName: \"kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm\") pod \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") "
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.468838 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts\") pod \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\" (UID: \"0b2e189e-8b3c-47a6-840f-9bca1dc9a429\") "
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.468877 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts\") pod \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") "
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.468955 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfklg\" (UniqueName: \"kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg\") pod \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\" (UID: \"9498592a-bccd-4780-bee1-7bcf7ab10ad2\") "
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.470613 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b2e189e-8b3c-47a6-840f-9bca1dc9a429" (UID: "0b2e189e-8b3c-47a6-840f-9bca1dc9a429"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.471271 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9498592a-bccd-4780-bee1-7bcf7ab10ad2" (UID: "9498592a-bccd-4780-bee1-7bcf7ab10ad2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.495074 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg" (OuterVolumeSpecName: "kube-api-access-mfklg") pod "9498592a-bccd-4780-bee1-7bcf7ab10ad2" (UID: "9498592a-bccd-4780-bee1-7bcf7ab10ad2"). InnerVolumeSpecName "kube-api-access-mfklg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.500302 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm" (OuterVolumeSpecName: "kube-api-access-6fqkm") pod "0b2e189e-8b3c-47a6-840f-9bca1dc9a429" (UID: "0b2e189e-8b3c-47a6-840f-9bca1dc9a429"). InnerVolumeSpecName "kube-api-access-6fqkm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.573406 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fqkm\" (UniqueName: \"kubernetes.io/projected/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-kube-api-access-6fqkm\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.573484 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b2e189e-8b3c-47a6-840f-9bca1dc9a429-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.573498 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9498592a-bccd-4780-bee1-7bcf7ab10ad2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.573513 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfklg\" (UniqueName: \"kubernetes.io/projected/9498592a-bccd-4780-bee1-7bcf7ab10ad2-kube-api-access-mfklg\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.970922 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a97d-account-create-update-9gppj" event={"ID":"9498592a-bccd-4780-bee1-7bcf7ab10ad2","Type":"ContainerDied","Data":"5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18"}
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.970982 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dfeaf3d8643a67c594c67dfd5dc15dbd970e6bceae2c407a9be0f808f1d0b18"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.970993 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a97d-account-create-update-9gppj"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.975173 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-czp2g"
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.977017 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-czp2g" event={"ID":"0b2e189e-8b3c-47a6-840f-9bca1dc9a429","Type":"ContainerDied","Data":"e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63"}
Feb 16 10:06:27 crc kubenswrapper[4814]: I0216 10:06:27.977071 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e01bf7a327cb991a6f3fbce2f26400abab94b50a2889a26efa2b0dd8c7e5ea63"
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.392552 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7wb47"
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.497980 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts\") pod \"510cfe06-8c29-40c1-abb9-0290e4d93541\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") "
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.498046 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5sjk\" (UniqueName: \"kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk\") pod \"510cfe06-8c29-40c1-abb9-0290e4d93541\" (UID: \"510cfe06-8c29-40c1-abb9-0290e4d93541\") "
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.498659 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "510cfe06-8c29-40c1-abb9-0290e4d93541" (UID: "510cfe06-8c29-40c1-abb9-0290e4d93541"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.499329 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/510cfe06-8c29-40c1-abb9-0290e4d93541-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.502516 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk" (OuterVolumeSpecName: "kube-api-access-l5sjk") pod "510cfe06-8c29-40c1-abb9-0290e4d93541" (UID: "510cfe06-8c29-40c1-abb9-0290e4d93541"). InnerVolumeSpecName "kube-api-access-l5sjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:28 crc kubenswrapper[4814]: I0216 10:06:28.601339 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5sjk\" (UniqueName: \"kubernetes.io/projected/510cfe06-8c29-40c1-abb9-0290e4d93541-kube-api-access-l5sjk\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:29 crc kubenswrapper[4814]: I0216 10:06:29.010568 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7wb47"
Feb 16 10:06:29 crc kubenswrapper[4814]: I0216 10:06:29.025155 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wb47" event={"ID":"510cfe06-8c29-40c1-abb9-0290e4d93541","Type":"ContainerDied","Data":"5b8f009cde8de88d145300560ea0a376653ec763dee7a16df81bde226aaea72a"}
Feb 16 10:06:29 crc kubenswrapper[4814]: I0216 10:06:29.025216 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8f009cde8de88d145300560ea0a376653ec763dee7a16df81bde226aaea72a"
Feb 16 10:06:32 crc kubenswrapper[4814]: I0216 10:06:32.047646 4814 generic.go:334] "Generic (PLEG): container finished" podID="d64fe4ad-1b8d-4f94-b825-675bb6bd7f89" containerID="9ebe37a2ee5841a2cc73b559f42a4ac9dcd8d56f4df6d8ac270676c5516ef688" exitCode=0
Feb 16 10:06:32 crc kubenswrapper[4814]: I0216 10:06:32.049333 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerDied","Data":"9ebe37a2ee5841a2cc73b559f42a4ac9dcd8d56f4df6d8ac270676c5516ef688"}
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.749705 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fn7m9"
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.758472 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e256-account-create-update-clbl4"
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.759962 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-759b-account-create-update-xd2c6"
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782026 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrxfd\" (UniqueName: \"kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd\") pod \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782218 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2vqh\" (UniqueName: \"kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh\") pod \"67344a8d-c26c-483f-b974-da997583505e\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782433 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-872sg\" (UniqueName: \"kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg\") pod \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782478 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts\") pod \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\" (UID: \"2c27257c-47e3-46d2-9324-70c85dd9e6ed\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782648 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts\") pod \"67344a8d-c26c-483f-b974-da997583505e\" (UID: \"67344a8d-c26c-483f-b974-da997583505e\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.782694 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts\") pod \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\" (UID: \"dc042a5c-d892-4056-ba5f-28fbdeac4a5e\") "
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.785268 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67344a8d-c26c-483f-b974-da997583505e" (UID: "67344a8d-c26c-483f-b974-da997583505e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.785364 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c27257c-47e3-46d2-9324-70c85dd9e6ed" (UID: "2c27257c-47e3-46d2-9324-70c85dd9e6ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.785501 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc042a5c-d892-4056-ba5f-28fbdeac4a5e" (UID: "dc042a5c-d892-4056-ba5f-28fbdeac4a5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.789157 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh" (OuterVolumeSpecName: "kube-api-access-t2vqh") pod "67344a8d-c26c-483f-b974-da997583505e" (UID: "67344a8d-c26c-483f-b974-da997583505e"). InnerVolumeSpecName "kube-api-access-t2vqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.798305 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg" (OuterVolumeSpecName: "kube-api-access-872sg") pod "2c27257c-47e3-46d2-9324-70c85dd9e6ed" (UID: "2c27257c-47e3-46d2-9324-70c85dd9e6ed"). InnerVolumeSpecName "kube-api-access-872sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.814258 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd" (OuterVolumeSpecName: "kube-api-access-hrxfd") pod "dc042a5c-d892-4056-ba5f-28fbdeac4a5e" (UID: "dc042a5c-d892-4056-ba5f-28fbdeac4a5e"). InnerVolumeSpecName "kube-api-access-hrxfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885181 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67344a8d-c26c-483f-b974-da997583505e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885591 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885658 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrxfd\" (UniqueName: \"kubernetes.io/projected/dc042a5c-d892-4056-ba5f-28fbdeac4a5e-kube-api-access-hrxfd\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885721 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2vqh\" (UniqueName: \"kubernetes.io/projected/67344a8d-c26c-483f-b974-da997583505e-kube-api-access-t2vqh\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885776 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-872sg\" (UniqueName: \"kubernetes.io/projected/2c27257c-47e3-46d2-9324-70c85dd9e6ed-kube-api-access-872sg\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:36 crc kubenswrapper[4814]: I0216 10:06:36.885825 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c27257c-47e3-46d2-9324-70c85dd9e6ed-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.107111 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fn7m9" event={"ID":"67344a8d-c26c-483f-b974-da997583505e","Type":"ContainerDied","Data":"a093151c58334087ba1d021ca2e21860131b633eb15ab132e7b16cc4bee3b76c"}
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.107167 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a093151c58334087ba1d021ca2e21860131b633eb15ab132e7b16cc4bee3b76c"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.107248 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fn7m9"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.114590 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e256-account-create-update-clbl4" event={"ID":"dc042a5c-d892-4056-ba5f-28fbdeac4a5e","Type":"ContainerDied","Data":"a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a"}
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.114661 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a92d888584a515d80c0f9e02ef05215be8bbbf93e7d9a2a766d6aaaa6d49ff4a"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.114617 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e256-account-create-update-clbl4"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.118683 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-759b-account-create-update-xd2c6" event={"ID":"2c27257c-47e3-46d2-9324-70c85dd9e6ed","Type":"ContainerDied","Data":"720ef04cd28e25a89187fb058bc2ae034eb1febc712fba2167a93b9ae3c6c891"}
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.118744 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720ef04cd28e25a89187fb058bc2ae034eb1febc712fba2167a93b9ae3c6c891"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.118809 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-759b-account-create-update-xd2c6"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.960295 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.960846 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.960917 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.961862 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 10:06:37 crc kubenswrapper[4814]: I0216 10:06:37.961929 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf" gracePeriod=600
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.132783 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerStarted","Data":"1e98322e5c161337d5e48f4b90c20073de9f62c015d45834700906505b7101af"}
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.136212 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf" exitCode=0
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.136276 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf"}
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.136313 4814 scope.go:117] "RemoveContainer" containerID="5b20fcb56d62b3faba2758b4da10c035a51c1093d8bbea8f8006bcade37f9f53"
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.139771 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t9fz6" event={"ID":"926559f6-8c52-4fdf-913e-2f2e43c4e409","Type":"ContainerStarted","Data":"73e75bbae7f45019aed7ef9b3c95224eadcbced429b89546248cbd68f05d9c9f"}
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.142242 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8klg" event={"ID":"0842a785-6944-4bb8-8c72-65aa4b098128","Type":"ContainerStarted","Data":"bdac0bf1c4f3f96a8e58083e6865222f820fb309d4fa7b459bb998ce1b75da70"}
Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.181952 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-b8klg" podStartSLOduration=3.353233585 podStartE2EDuration="16.181665354s" podCreationTimestamp="2026-02-16 10:06:22 +0000 UTC" firstStartedPulling="2026-02-16 10:06:24.496181338 +0000 UTC m=+1242.189337518" lastFinishedPulling="2026-02-16
10:06:37.324613107 +0000 UTC m=+1255.017769287" observedRunningTime="2026-02-16 10:06:38.17937978 +0000 UTC m=+1255.872535960" watchObservedRunningTime="2026-02-16 10:06:38.181665354 +0000 UTC m=+1255.874821534" Feb 16 10:06:38 crc kubenswrapper[4814]: I0216 10:06:38.182146 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-t9fz6" podStartSLOduration=3.164261489 podStartE2EDuration="15.182140087s" podCreationTimestamp="2026-02-16 10:06:23 +0000 UTC" firstStartedPulling="2026-02-16 10:06:25.716643488 +0000 UTC m=+1243.409799668" lastFinishedPulling="2026-02-16 10:06:37.734522086 +0000 UTC m=+1255.427678266" observedRunningTime="2026-02-16 10:06:38.162091393 +0000 UTC m=+1255.855247573" watchObservedRunningTime="2026-02-16 10:06:38.182140087 +0000 UTC m=+1255.875296267" Feb 16 10:06:39 crc kubenswrapper[4814]: I0216 10:06:39.153721 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9znsh" event={"ID":"332682c6-8779-42d6-8445-1be863b81659","Type":"ContainerStarted","Data":"c61937128bff8df80b337778f162e754deeb832e6f35f9ef72e31ab3fe7a6c2d"} Feb 16 10:06:39 crc kubenswrapper[4814]: I0216 10:06:39.156915 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1"} Feb 16 10:06:39 crc kubenswrapper[4814]: I0216 10:06:39.174131 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9znsh" podStartSLOduration=3.713188399 podStartE2EDuration="41.174108522s" podCreationTimestamp="2026-02-16 10:05:58 +0000 UTC" firstStartedPulling="2026-02-16 10:06:00.251776459 +0000 UTC m=+1217.944932639" lastFinishedPulling="2026-02-16 10:06:37.712696582 +0000 UTC m=+1255.405852762" observedRunningTime="2026-02-16 10:06:39.170452721 +0000 UTC 
m=+1256.863608921" watchObservedRunningTime="2026-02-16 10:06:39.174108522 +0000 UTC m=+1256.867264702" Feb 16 10:06:42 crc kubenswrapper[4814]: I0216 10:06:42.192739 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerStarted","Data":"525e5b246d9186943a3dc08e9c1eebbf2a186cb0ca5913a785185f8b933e13f3"} Feb 16 10:06:42 crc kubenswrapper[4814]: I0216 10:06:42.193709 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"d64fe4ad-1b8d-4f94-b825-675bb6bd7f89","Type":"ContainerStarted","Data":"fe867005a8e4e221f49b1feada965ccb5c87242f3b22edf563b9daae102a9863"} Feb 16 10:06:42 crc kubenswrapper[4814]: I0216 10:06:42.227974 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.22794537 podStartE2EDuration="22.22794537s" podCreationTimestamp="2026-02-16 10:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:42.221801941 +0000 UTC m=+1259.914958121" watchObservedRunningTime="2026-02-16 10:06:42.22794537 +0000 UTC m=+1259.921101550" Feb 16 10:06:44 crc kubenswrapper[4814]: I0216 10:06:44.215260 4814 generic.go:334] "Generic (PLEG): container finished" podID="926559f6-8c52-4fdf-913e-2f2e43c4e409" containerID="73e75bbae7f45019aed7ef9b3c95224eadcbced429b89546248cbd68f05d9c9f" exitCode=0 Feb 16 10:06:44 crc kubenswrapper[4814]: I0216 10:06:44.215362 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t9fz6" event={"ID":"926559f6-8c52-4fdf-913e-2f2e43c4e409","Type":"ContainerDied","Data":"73e75bbae7f45019aed7ef9b3c95224eadcbced429b89546248cbd68f05d9c9f"} Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.228644 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="0842a785-6944-4bb8-8c72-65aa4b098128" containerID="bdac0bf1c4f3f96a8e58083e6865222f820fb309d4fa7b459bb998ce1b75da70" exitCode=0 Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.228876 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8klg" event={"ID":"0842a785-6944-4bb8-8c72-65aa4b098128","Type":"ContainerDied","Data":"bdac0bf1c4f3f96a8e58083e6865222f820fb309d4fa7b459bb998ce1b75da70"} Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.619595 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.703455 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle\") pod \"926559f6-8c52-4fdf-913e-2f2e43c4e409\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.703641 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data\") pod \"926559f6-8c52-4fdf-913e-2f2e43c4e409\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.703718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5k2t\" (UniqueName: \"kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t\") pod \"926559f6-8c52-4fdf-913e-2f2e43c4e409\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.703813 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data\") pod 
\"926559f6-8c52-4fdf-913e-2f2e43c4e409\" (UID: \"926559f6-8c52-4fdf-913e-2f2e43c4e409\") " Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.712103 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "926559f6-8c52-4fdf-913e-2f2e43c4e409" (UID: "926559f6-8c52-4fdf-913e-2f2e43c4e409"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.712137 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t" (OuterVolumeSpecName: "kube-api-access-n5k2t") pod "926559f6-8c52-4fdf-913e-2f2e43c4e409" (UID: "926559f6-8c52-4fdf-913e-2f2e43c4e409"). InnerVolumeSpecName "kube-api-access-n5k2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.737815 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "926559f6-8c52-4fdf-913e-2f2e43c4e409" (UID: "926559f6-8c52-4fdf-913e-2f2e43c4e409"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.762852 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data" (OuterVolumeSpecName: "config-data") pod "926559f6-8c52-4fdf-913e-2f2e43c4e409" (UID: "926559f6-8c52-4fdf-913e-2f2e43c4e409"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.806101 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.806141 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.806155 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/926559f6-8c52-4fdf-913e-2f2e43c4e409-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:45 crc kubenswrapper[4814]: I0216 10:06:45.806165 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5k2t\" (UniqueName: \"kubernetes.io/projected/926559f6-8c52-4fdf-913e-2f2e43c4e409-kube-api-access-n5k2t\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.071821 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.241151 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-t9fz6" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.241150 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t9fz6" event={"ID":"926559f6-8c52-4fdf-913e-2f2e43c4e409","Type":"ContainerDied","Data":"4b26b35bab36790ec32cdc25e139f4d6ebf4d8bfec3729fb95aa01da39677382"} Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.242066 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b26b35bab36790ec32cdc25e139f4d6ebf4d8bfec3729fb95aa01da39677382" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.605600 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.729170 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle\") pod \"0842a785-6944-4bb8-8c72-65aa4b098128\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.729290 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data\") pod \"0842a785-6944-4bb8-8c72-65aa4b098128\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.729325 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbhpz\" (UniqueName: \"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz\") pod \"0842a785-6944-4bb8-8c72-65aa4b098128\" (UID: \"0842a785-6944-4bb8-8c72-65aa4b098128\") " Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.734671 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz" (OuterVolumeSpecName: "kube-api-access-pbhpz") pod "0842a785-6944-4bb8-8c72-65aa4b098128" (UID: "0842a785-6944-4bb8-8c72-65aa4b098128"). InnerVolumeSpecName "kube-api-access-pbhpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.760495 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0842a785-6944-4bb8-8c72-65aa4b098128" (UID: "0842a785-6944-4bb8-8c72-65aa4b098128"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.791703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data" (OuterVolumeSpecName: "config-data") pod "0842a785-6944-4bb8-8c72-65aa4b098128" (UID: "0842a785-6944-4bb8-8c72-65aa4b098128"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.831774 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.831820 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0842a785-6944-4bb8-8c72-65aa4b098128-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:46 crc kubenswrapper[4814]: I0216 10:06:46.831831 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbhpz\" (UniqueName: \"kubernetes.io/projected/0842a785-6944-4bb8-8c72-65aa4b098128-kube-api-access-pbhpz\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.252757 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-b8klg" event={"ID":"0842a785-6944-4bb8-8c72-65aa4b098128","Type":"ContainerDied","Data":"ab24c44ee772c063943c36561f0ceea17cd782291cc2e9610aaff3c0aff118d2"} Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.252824 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab24c44ee772c063943c36561f0ceea17cd782291cc2e9610aaff3c0aff118d2" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.252830 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-b8klg" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.548986 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m6cfw"] Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550301 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc042a5c-d892-4056-ba5f-28fbdeac4a5e" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550327 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc042a5c-d892-4056-ba5f-28fbdeac4a5e" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550340 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67344a8d-c26c-483f-b974-da997583505e" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550349 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="67344a8d-c26c-483f-b974-da997583505e" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550361 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac41f60a-214e-4093-ae06-4491ce820f53" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550368 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac41f60a-214e-4093-ae06-4491ce820f53" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550383 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2e189e-8b3c-47a6-840f-9bca1dc9a429" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550389 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2e189e-8b3c-47a6-840f-9bca1dc9a429" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550397 4814 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="926559f6-8c52-4fdf-913e-2f2e43c4e409" containerName="watcher-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550403 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="926559f6-8c52-4fdf-913e-2f2e43c4e409" containerName="watcher-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550417 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="dnsmasq-dns" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550423 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="dnsmasq-dns" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550439 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9498592a-bccd-4780-bee1-7bcf7ab10ad2" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550447 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9498592a-bccd-4780-bee1-7bcf7ab10ad2" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550458 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c27257c-47e3-46d2-9324-70c85dd9e6ed" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550467 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c27257c-47e3-46d2-9324-70c85dd9e6ed" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550477 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="init" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550484 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="init" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550501 4814 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="510cfe06-8c29-40c1-abb9-0290e4d93541" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550506 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="510cfe06-8c29-40c1-abb9-0290e4d93541" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: E0216 10:06:47.550518 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0842a785-6944-4bb8-8c72-65aa4b098128" containerName="keystone-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550525 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0842a785-6944-4bb8-8c72-65aa4b098128" containerName="keystone-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550741 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="67344a8d-c26c-483f-b974-da997583505e" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550774 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2e189e-8b3c-47a6-840f-9bca1dc9a429" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550814 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc042a5c-d892-4056-ba5f-28fbdeac4a5e" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550823 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9498592a-bccd-4780-bee1-7bcf7ab10ad2" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550836 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0842a785-6944-4bb8-8c72-65aa4b098128" containerName="keystone-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550845 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac41f60a-214e-4093-ae06-4491ce820f53" 
containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550865 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="926559f6-8c52-4fdf-913e-2f2e43c4e409" containerName="watcher-db-sync" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550873 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="510cfe06-8c29-40c1-abb9-0290e4d93541" containerName="mariadb-database-create" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550880 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c27257c-47e3-46d2-9324-70c85dd9e6ed" containerName="mariadb-account-create-update" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.550890 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="22837145-ddd2-4606-bc52-d633720bdeb2" containerName="dnsmasq-dns" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.552005 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.562283 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.568282 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.568652 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.568794 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.573251 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7vv8q" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.608588 4814 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"] Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.655286 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.680711 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m6cfw"] Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.731680 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.731831 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.731942 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.732123 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw" Feb 
16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.732358 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.732417 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcx4\" (UniqueName: \"kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.732595 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.744465 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.745415 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " 
pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.745507 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.745628 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vkq\" (UniqueName: \"kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.745665 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.794620 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.827484 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.829437 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.838457 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.845624 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.845967 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-tkr8k"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847411 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847466 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847489 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcx4\" (UniqueName: \"kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847550 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847588 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847613 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847659 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847716 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847734 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8vkq\" (UniqueName: \"kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847769 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847789 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847808 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847834 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9pml\" (UniqueName: \"kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847869 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.847913 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.848893 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.856755 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.857838 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.858421 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.859365 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.859921 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.859971 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.862719 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.870950 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.871146 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.893569 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.893908 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.893958 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.895672 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.903300 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.925404 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.926248 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.926556 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.953381 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vkq\" (UniqueName: \"kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq\") pod \"keystone-bootstrap-m6cfw\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " pod="openstack/keystone-bootstrap-m6cfw"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.954075 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcx4\" (UniqueName: \"kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4\") pod \"dnsmasq-dns-c9cffc67f-n9h82\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955222 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8psq\" (UniqueName: \"kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955282 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955308 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955341 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955373 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955393 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955420 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955449 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955464 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955492 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9pml\" (UniqueName: \"kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955547 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9vj\" (UniqueName: \"kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955575 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955606 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.955640 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.956511 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.965312 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-p4vk6"]
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.966100 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.966868 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.967012 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.974287 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jj8w9"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.974666 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.974845 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 16 10:06:47 crc kubenswrapper[4814]: I0216 10:06:47.988083 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.015712 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p4vk6"]
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.018279 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9pml\" (UniqueName: \"kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml\") pod \"watcher-api-0\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " pod="openstack/watcher-api-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.021675 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-chlgm"]
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.045312 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.053664 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-chlgm"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.057463 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058332 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058763 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc9vj\" (UniqueName: \"kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058803 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058854 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058877 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75r9b\" (UniqueName: \"kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058921 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8psq\" (UniqueName: \"kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058947 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058976 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.058999 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.059019 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.059038 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.059065 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.059088 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.059112 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.065552 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.065745 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-chlgm"]
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.069218 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.070848 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4fj2k"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.071288 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.084775 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.089226 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.092600 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.100516 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.109586 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"]
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.110196 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.111419 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c9b5486bf-67f5h"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.121396 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-zb66p"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.121483 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.124581 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.124976 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.129211 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"]
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.150569 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8psq\" (UniqueName: \"kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq\") pod \"watcher-applier-0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " pod="openstack/watcher-applier-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.161706 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc9vj\" (UniqueName: \"kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj\") pod \"watcher-decision-engine-0\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173229 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173308 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173388 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173428 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173464 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173495 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zvcl\" (UniqueName: \"kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173612 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173809 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173886 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173918 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcnh\" (UniqueName: \"kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.173940 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:06:48 crc kubenswrapper[4814]: I0216
10:06:48.173965 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.174002 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.174043 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75r9b\" (UniqueName: \"kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.175112 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.175929 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.176709 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.182253 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.184218 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.190805 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.200372 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.224231 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts\") pod \"cinder-db-sync-p4vk6\" (UID: 
\"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.232950 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.243210 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75r9b\" (UniqueName: \"kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b\") pod \"cinder-db-sync-p4vk6\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.257415 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.279932 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.279997 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280021 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280045 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zvcl\" (UniqueName: \"kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280138 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280157 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmcnh\" (UniqueName: \"kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.280187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.284379 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.284691 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.286440 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.293113 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-47nvw"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.295031 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.303714 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.304567 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vsf7q" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.304884 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.310445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.315255 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.315628 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.323794 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmcnh\" (UniqueName: \"kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh\") pod \"neutron-db-sync-chlgm\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.334658 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-chlgm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.340440 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zvcl\" (UniqueName: \"kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl\") pod \"horizon-7c9b5486bf-67f5h\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.382869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.383010 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm47\" (UniqueName: \"kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.383223 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.415573 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.435825 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-47nvw"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.445159 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.447337 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486009 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486082 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486123 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: 
\"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486162 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486183 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486218 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsm47\" (UniqueName: \"kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vnjk\" (UniqueName: \"kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486362 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb\") pod 
\"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.486423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.501712 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.502412 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.510898 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.535468 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8lvv6"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.537304 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.537753 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsm47\" (UniqueName: \"kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47\") pod \"barbican-db-sync-47nvw\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.540387 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2b76v" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.540862 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.546161 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.561444 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.609830 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.617496 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.620255 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.620382 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.620784 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.620831 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc 
kubenswrapper[4814]: I0216 10:06:48.625564 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.627937 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.632332 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.620941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vnjk\" (UniqueName: \"kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.637977 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.638907 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.639239 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.650969 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.671396 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vnjk\" (UniqueName: \"kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk\") pod \"dnsmasq-dns-58cd7864d7-r6jwm\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.695335 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8lvv6"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.714959 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.734127 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.736237 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.741914 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8cs4\" (UniqueName: \"kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.741985 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742013 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742137 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742160 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzz8\" (UniqueName: \"kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " 
pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742217 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742242 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742271 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742301 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742331 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742368 
4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.742385 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.773689 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-47nvw" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.793259 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.820914 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844315 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844414 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8cs4\" (UniqueName: \"kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844452 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844594 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844629 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844665 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzz8\" (UniqueName: \"kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844730 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844762 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sht7v\" (UniqueName: \"kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844799 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844839 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844866 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844942 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.844986 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.845029 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd\") pod \"ceilometer-0\" 
(UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.845087 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.848484 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.848855 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.848982 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.857010 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.858155 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.861234 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.862010 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.867309 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.867738 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzz8\" (UniqueName: \"kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.872262 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8cs4\" (UniqueName: \"kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 
10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.881355 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " pod="openstack/ceilometer-0" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.881681 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts\") pod \"placement-db-sync-8lvv6\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.911118 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lvv6" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.962318 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.962485 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.962587 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sht7v\" (UniqueName: \"kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " 
pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.962705 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.962821 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.968644 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.970959 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:48 crc kubenswrapper[4814]: I0216 10:06:48.979513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.004144 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sht7v\" (UniqueName: \"kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.005632 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key\") pod \"horizon-77df547889-kjrxc\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.040363 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.067620 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.254427 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"] Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.362696 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" event={"ID":"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5","Type":"ContainerStarted","Data":"3783719a8f8cd8b3c920cd3ade015dc92ec142c17f46da925e10d65376a5678c"} Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.566358 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.595971 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-chlgm"] Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.775194 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m6cfw"] Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.961573 4814 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"] Feb 16 10:06:49 crc kubenswrapper[4814]: I0216 10:06:49.989737 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.106264 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 10:06:50 crc kubenswrapper[4814]: W0216 10:06:50.123892 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3873abc6_2d46_4624_84d9_b53559d1d83f.slice/crio-0031a408206e05eafc57201df02ab1b45792d4d9c15ae470431e885255de81aa WatchSource:0}: Error finding container 0031a408206e05eafc57201df02ab1b45792d4d9c15ae470431e885255de81aa: Status 404 returned error can't find the container with id 0031a408206e05eafc57201df02ab1b45792d4d9c15ae470431e885255de81aa Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.385327 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-47nvw"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.394597 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-p4vk6"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.402675 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:06:50 crc kubenswrapper[4814]: W0216 10:06:50.410486 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode57a813a_2457_4800_8eef_a91c409659f3.slice/crio-95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403 WatchSource:0}: Error finding container 95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403: Status 404 returned error can't find the container with id 95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403 Feb 16 10:06:50 crc 
kubenswrapper[4814]: I0216 10:06:50.420919 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-chlgm" event={"ID":"ac000d0d-d120-4828-b60f-3c2e3371dc68","Type":"ContainerStarted","Data":"acbf006fee012b44f22e856025634e5be593954e8ef65de06909047c7cac5cba"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.420988 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-chlgm" event={"ID":"ac000d0d-d120-4828-b60f-3c2e3371dc68","Type":"ContainerStarted","Data":"9098f5aa96920b3d3c0016544c3e88a53a5ac435a5bd41c4234779c74418fa43"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.431300 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c9b5486bf-67f5h" event={"ID":"36f76141-0fc0-4be0-9f9e-d5bd3e662d91","Type":"ContainerStarted","Data":"1c22e189657935fd5d5e3a4974d7664203d0203cdf40f521ab66936560ef2c03"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.439258 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8lvv6"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.452138 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerStarted","Data":"94904605b4526eb2b339409e32926a027f164056f9a692bb18443596ddeb2c7a"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.452205 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerStarted","Data":"fbd8ed33dbc81a8ca9001bfcb556058930fa7e84a558f188fac95437b4a46bf0"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.461209 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-chlgm" podStartSLOduration=3.461169882 podStartE2EDuration="3.461169882s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:50.452674327 +0000 UTC m=+1268.145830507" watchObservedRunningTime="2026-02-16 10:06:50.461169882 +0000 UTC m=+1268.154326062" Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.468914 4814 generic.go:334] "Generic (PLEG): container finished" podID="332682c6-8779-42d6-8445-1be863b81659" containerID="c61937128bff8df80b337778f162e754deeb832e6f35f9ef72e31ab3fe7a6c2d" exitCode=0 Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.469091 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9znsh" event={"ID":"332682c6-8779-42d6-8445-1be863b81659","Type":"ContainerDied","Data":"c61937128bff8df80b337778f162e754deeb832e6f35f9ef72e31ab3fe7a6c2d"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.489998 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.492674 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"278f75a7-f7ec-4e83-9c09-83ceb414b5a0","Type":"ContainerStarted","Data":"a1cd53bd595c768f37ac9e87fd298902a74caa002c83887a5e5be88398472a56"} Feb 16 10:06:50 crc kubenswrapper[4814]: W0216 10:06:50.503181 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d32f99f_7ecb_46e1_86b0_069ffcf7336d.slice/crio-0cb8d32d14eae447fafc74fa6cb68f634dc9c4c5a63895a75ad4b5897f3f121a WatchSource:0}: Error finding container 0cb8d32d14eae447fafc74fa6cb68f634dc9c4c5a63895a75ad4b5897f3f121a: Status 404 returned error can't find the container with id 0cb8d32d14eae447fafc74fa6cb68f634dc9c4c5a63895a75ad4b5897f3f121a Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.505251 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"3873abc6-2d46-4624-84d9-b53559d1d83f","Type":"ContainerStarted","Data":"0031a408206e05eafc57201df02ab1b45792d4d9c15ae470431e885255de81aa"} Feb 16 10:06:50 crc kubenswrapper[4814]: W0216 10:06:50.507693 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc86bfb21_74b9_406a_ae57_635d5ee7e5fd.slice/crio-6babd313f598b38921defef809aea84aea6b75959b19dee743bd0e9219bd5d1e WatchSource:0}: Error finding container 6babd313f598b38921defef809aea84aea6b75959b19dee743bd0e9219bd5d1e: Status 404 returned error can't find the container with id 6babd313f598b38921defef809aea84aea6b75959b19dee743bd0e9219bd5d1e Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.550604 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.560786 4814 generic.go:334] "Generic (PLEG): container finished" podID="5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" containerID="a553ed81710870dfdb259800896754b57fe335210bb276aca831b8f18458de80" exitCode=0 Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.561917 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" event={"ID":"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5","Type":"ContainerDied","Data":"a553ed81710870dfdb259800896754b57fe335210bb276aca831b8f18458de80"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.596439 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m6cfw" event={"ID":"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5","Type":"ContainerStarted","Data":"c931e684eccaf441bbf7e8ff7254141e673975409d35a5c1bd1ac8b68187b239"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.596508 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m6cfw" 
event={"ID":"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5","Type":"ContainerStarted","Data":"bc0679b9bc827f8998a3a1b97123d890c6d66e3d423a90db97619e980e14f033"} Feb 16 10:06:50 crc kubenswrapper[4814]: I0216 10:06:50.644944 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m6cfw" podStartSLOduration=3.64492239 podStartE2EDuration="3.64492239s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:50.642890934 +0000 UTC m=+1268.336047134" watchObservedRunningTime="2026-02-16 10:06:50.64492239 +0000 UTC m=+1268.338078570" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.074368 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.117863 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.119208 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291232 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291431 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291470 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbcx4\" (UniqueName: \"kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291520 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291608 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.291633 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb\") pod \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\" (UID: \"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5\") " Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.329044 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.340139 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.340338 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.349830 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4" (OuterVolumeSpecName: "kube-api-access-jbcx4") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "kube-api-access-jbcx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.375083 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config" (OuterVolumeSpecName: "config") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.375961 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" (UID: "5ebcd05d-68b9-45b3-95a7-e7d52a8678d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.394650 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbcx4\" (UniqueName: \"kubernetes.io/projected/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-kube-api-access-jbcx4\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.394697 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.394713 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.394725 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc 
kubenswrapper[4814]: I0216 10:06:51.394737 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.394748 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.636420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-47nvw" event={"ID":"e57a813a-2457-4800-8eef-a91c409659f3","Type":"ContainerStarted","Data":"95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.679302 4814 generic.go:334] "Generic (PLEG): container finished" podID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerID="f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404" exitCode=0 Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.679431 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" event={"ID":"2f2ed9b7-2884-4466-a5b3-f09640444423","Type":"ContainerDied","Data":"f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.679462 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" event={"ID":"2f2ed9b7-2884-4466-a5b3-f09640444423","Type":"ContainerStarted","Data":"0ba16992855289067195531080e66942ae9f458910a596a5c6df65037114cacf"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.683257 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerStarted","Data":"6babd313f598b38921defef809aea84aea6b75959b19dee743bd0e9219bd5d1e"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.706753 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p4vk6" event={"ID":"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4","Type":"ContainerStarted","Data":"2b98731628f3e95a80a45f7af0aebd075860209975016487cd62ab41fdafd8c9"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.716196 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" event={"ID":"5ebcd05d-68b9-45b3-95a7-e7d52a8678d5","Type":"ContainerDied","Data":"3783719a8f8cd8b3c920cd3ade015dc92ec142c17f46da925e10d65376a5678c"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.716273 4814 scope.go:117] "RemoveContainer" containerID="a553ed81710870dfdb259800896754b57fe335210bb276aca831b8f18458de80" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.716428 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9cffc67f-n9h82" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.759970 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77df547889-kjrxc" event={"ID":"7d32f99f-7ecb-46e1-86b0-069ffcf7336d","Type":"ContainerStarted","Data":"0cb8d32d14eae447fafc74fa6cb68f634dc9c4c5a63895a75ad4b5897f3f121a"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.770850 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lvv6" event={"ID":"e5a3e754-132c-4c4e-9593-91ca3f391363","Type":"ContainerStarted","Data":"db0959f756a3be14d65b73358850081c17f2470f207760f2b0bf1700427de0de"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.806256 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.827704 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerStarted","Data":"f54d193ed03314782a30a03ae97f4c3196378f9e250d7cc3e1c3f3995f0ac24d"} Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.829446 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.844714 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.875785 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"] Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.922788 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"] Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.953426 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9cffc67f-n9h82"] Feb 16 10:06:51 crc 
kubenswrapper[4814]: I0216 10:06:51.973738 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:06:51 crc kubenswrapper[4814]: E0216 10:06:51.974356 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" containerName="init" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.974375 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" containerName="init" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.974594 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" containerName="init" Feb 16 10:06:51 crc kubenswrapper[4814]: I0216 10:06:51.976058 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.003827 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.010610 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.010671 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.010652244 podStartE2EDuration="5.010652244s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:06:51.940524076 +0000 UTC m=+1269.633680256" watchObservedRunningTime="2026-02-16 10:06:52.010652244 +0000 UTC m=+1269.703808424" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.114641 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsr6\" (UniqueName: 
\"kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.114718 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.114819 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.114961 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.115033 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.220580 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.219274 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.220956 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.221840 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.222070 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsr6\" (UniqueName: \"kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.222113 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts\") pod \"horizon-7f55894665-vd6fz\" 
(UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.222433 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.223973 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.226390 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.262902 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsr6\" (UniqueName: \"kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6\") pod \"horizon-7f55894665-vd6fz\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.346834 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.844755 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api-log" containerID="cri-o://94904605b4526eb2b339409e32926a027f164056f9a692bb18443596ddeb2c7a" gracePeriod=30 Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.844876 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" containerID="cri-o://f54d193ed03314782a30a03ae97f4c3196378f9e250d7cc3e1c3f3995f0ac24d" gracePeriod=30 Feb 16 10:06:52 crc kubenswrapper[4814]: I0216 10:06:52.879143 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": EOF" Feb 16 10:06:53 crc kubenswrapper[4814]: I0216 10:06:53.055359 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebcd05d-68b9-45b3-95a7-e7d52a8678d5" path="/var/lib/kubelet/pods/5ebcd05d-68b9-45b3-95a7-e7d52a8678d5/volumes" Feb 16 10:06:53 crc kubenswrapper[4814]: I0216 10:06:53.176777 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 10:06:53 crc kubenswrapper[4814]: I0216 10:06:53.896749 4814 generic.go:334] "Generic (PLEG): container finished" podID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerID="94904605b4526eb2b339409e32926a027f164056f9a692bb18443596ddeb2c7a" exitCode=143 Feb 16 10:06:53 crc kubenswrapper[4814]: I0216 10:06:53.896847 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerDied","Data":"94904605b4526eb2b339409e32926a027f164056f9a692bb18443596ddeb2c7a"} Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.704252 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9znsh" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.821772 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhmp2\" (UniqueName: \"kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2\") pod \"332682c6-8779-42d6-8445-1be863b81659\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.821840 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle\") pod \"332682c6-8779-42d6-8445-1be863b81659\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.821976 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data\") pod \"332682c6-8779-42d6-8445-1be863b81659\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.822138 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data\") pod \"332682c6-8779-42d6-8445-1be863b81659\" (UID: \"332682c6-8779-42d6-8445-1be863b81659\") " Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.833807 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2" 
(OuterVolumeSpecName: "kube-api-access-dhmp2") pod "332682c6-8779-42d6-8445-1be863b81659" (UID: "332682c6-8779-42d6-8445-1be863b81659"). InnerVolumeSpecName "kube-api-access-dhmp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.833887 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "332682c6-8779-42d6-8445-1be863b81659" (UID: "332682c6-8779-42d6-8445-1be863b81659"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.855341 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "332682c6-8779-42d6-8445-1be863b81659" (UID: "332682c6-8779-42d6-8445-1be863b81659"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.901386 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data" (OuterVolumeSpecName: "config-data") pod "332682c6-8779-42d6-8445-1be863b81659" (UID: "332682c6-8779-42d6-8445-1be863b81659"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.925191 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhmp2\" (UniqueName: \"kubernetes.io/projected/332682c6-8779-42d6-8445-1be863b81659-kube-api-access-dhmp2\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.925227 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.925238 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.925250 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/332682c6-8779-42d6-8445-1be863b81659-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.930196 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9znsh" event={"ID":"332682c6-8779-42d6-8445-1be863b81659","Type":"ContainerDied","Data":"147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8"} Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.930242 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147f256ba6f6fe83920c73b4dd0602d6c99c0669817e9a24ace1f1994563c4c8" Feb 16 10:06:54 crc kubenswrapper[4814]: I0216 10:06:54.930359 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9znsh" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.215694 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.263088 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:06:56 crc kubenswrapper[4814]: E0216 10:06:56.263765 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="332682c6-8779-42d6-8445-1be863b81659" containerName="glance-db-sync" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.263785 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="332682c6-8779-42d6-8445-1be863b81659" containerName="glance-db-sync" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.264076 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="332682c6-8779-42d6-8445-1be863b81659" containerName="glance-db-sync" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.265886 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.293192 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361387 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361452 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsfbb\" (UniqueName: \"kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361477 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361501 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.361522 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.463860 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.463946 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.464057 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsfbb\" (UniqueName: \"kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.464101 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.464147 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.464173 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.465636 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.465956 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.466181 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.466226 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.466231 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.493737 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsfbb\" (UniqueName: \"kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb\") pod \"dnsmasq-dns-9476bf7d5-wqwks\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.611373 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.940140 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:06:56 crc kubenswrapper[4814]: I0216 10:06:56.975222 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.016881 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.023289 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099543 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099627 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099654 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099761 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099840 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099856 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59ls\" (UniqueName: \"kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.099913 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.124410 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.124471 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.148761 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.156717 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.157256 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jjcbj" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.158255 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.180062 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.211376 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.211452 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfkd\" (UniqueName: \"kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.211612 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 
crc kubenswrapper[4814]: I0216 10:06:57.225812 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226036 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226070 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226110 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226137 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59ls\" (UniqueName: \"kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226156 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226205 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226298 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226379 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226422 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.226662 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.235479 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.247152 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.260865 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.261552 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.263516 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.266039 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" 
Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.267742 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.275279 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.276413 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.279634 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.287837 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76696f58b-dfzph"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.300859 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59ls\" (UniqueName: \"kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls\") pod \"horizon-6f95b74b5b-mpwlg\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.314592 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.330866 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.330970 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxnl\" (UniqueName: \"kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331003 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331072 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtfkd\" (UniqueName: \"kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331113 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331192 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331231 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331260 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331307 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331403 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " 
pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331439 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331462 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331484 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.331622 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.336458 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 
crc kubenswrapper[4814]: I0216 10:06:57.339896 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.340380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.347774 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.359004 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.359341 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtfkd\" (UniqueName: \"kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.360174 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.366780 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.428805 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.430210 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") " pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.436191 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.439119 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.436658 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxnl\" (UniqueName: \"kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.441407 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.441633 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.441670 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.441754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.441931 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.450768 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.458053 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76696f58b-dfzph"] Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.459946 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.481577 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.482280 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.487663 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxnl\" (UniqueName: \"kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.490016 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.492470 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.518342 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.549373 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.549798 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-config-data\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550040 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-combined-ca-bundle\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550201 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-secret-key\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550401 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-tls-certs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550477 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4064477-94ed-4129-819b-63df1d34d227-logs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550553 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-scripts\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.550758 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvzk\" (UniqueName: \"kubernetes.io/projected/d4064477-94ed-4129-819b-63df1d34d227-kube-api-access-hhvzk\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654367 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-tls-certs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654449 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4064477-94ed-4129-819b-63df1d34d227-logs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654494 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-scripts\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654678 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvzk\" (UniqueName: \"kubernetes.io/projected/d4064477-94ed-4129-819b-63df1d34d227-kube-api-access-hhvzk\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654738 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-config-data\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654803 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-combined-ca-bundle\") pod \"horizon-76696f58b-dfzph\" (UID: 
\"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.654840 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-secret-key\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.656717 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4064477-94ed-4129-819b-63df1d34d227-logs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.658751 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-scripts\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.659962 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4064477-94ed-4129-819b-63df1d34d227-config-data\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.663429 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-tls-certs\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.669777 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-combined-ca-bundle\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.672953 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4064477-94ed-4129-819b-63df1d34d227-horizon-secret-key\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.679923 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvzk\" (UniqueName: \"kubernetes.io/projected/d4064477-94ed-4129-819b-63df1d34d227-kube-api-access-hhvzk\") pod \"horizon-76696f58b-dfzph\" (UID: \"d4064477-94ed-4129-819b-63df1d34d227\") " pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.787326 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": read tcp 10.217.0.2:47890->10.217.0.151:9322: read: connection reset by peer" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.788119 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": dial tcp 10.217.0.151:9322: connect: connection refused" Feb 16 10:06:57 crc kubenswrapper[4814]: I0216 10:06:57.869826 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:06:58 crc kubenswrapper[4814]: I0216 10:06:58.015435 4814 generic.go:334] "Generic (PLEG): container finished" podID="a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" containerID="c931e684eccaf441bbf7e8ff7254141e673975409d35a5c1bd1ac8b68187b239" exitCode=0 Feb 16 10:06:58 crc kubenswrapper[4814]: I0216 10:06:58.015517 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m6cfw" event={"ID":"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5","Type":"ContainerDied","Data":"c931e684eccaf441bbf7e8ff7254141e673975409d35a5c1bd1ac8b68187b239"} Feb 16 10:06:58 crc kubenswrapper[4814]: I0216 10:06:58.021693 4814 generic.go:334] "Generic (PLEG): container finished" podID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerID="f54d193ed03314782a30a03ae97f4c3196378f9e250d7cc3e1c3f3995f0ac24d" exitCode=0 Feb 16 10:06:58 crc kubenswrapper[4814]: I0216 10:06:58.021748 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerDied","Data":"f54d193ed03314782a30a03ae97f4c3196378f9e250d7cc3e1c3f3995f0ac24d"} Feb 16 10:06:58 crc kubenswrapper[4814]: I0216 10:06:58.177937 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": dial tcp 10.217.0.151:9322: connect: connection refused" Feb 16 10:06:59 crc kubenswrapper[4814]: I0216 10:06:59.238907 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:07:01 crc kubenswrapper[4814]: I0216 10:07:01.615787 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:07:01 crc kubenswrapper[4814]: I0216 10:07:01.761185 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:08 crc kubenswrapper[4814]: I0216 10:07:08.178070 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:07:13 crc kubenswrapper[4814]: I0216 10:07:13.179219 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.498458 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.499196 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.499454 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch96h648h5b7h59ch65bh674hf9h68bh5dfh566h587h5ddh9h5d9h5bch5d6h594h8fh557h8fhcbh67hbdh9bh579h557hf5h565h76h5b5h9fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sht7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77df547889-kjrxc_openstack(7d32f99f-7ecb-46e1-86b0-069ffcf7336d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 
10:07:14.503022 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-77df547889-kjrxc" podUID="7d32f99f-7ecb-46e1-86b0-069ffcf7336d" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.759574 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.760055 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.760318 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56bh7dh5bbh67ch5b6h5bbh58ch67hdbh584h698h68hddhb8h7bh667h5d8h586h88h64fh578h5fch67bh548hffhc8h64hb5h658h5d5hc4hcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zvcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7c9b5486bf-67f5h_openstack(36f76141-0fc0-4be0-9f9e-d5bd3e662d91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 
10:07:14.763112 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-7c9b5486bf-67f5h" podUID="36f76141-0fc0-4be0-9f9e-d5bd3e662d91" Feb 16 10:07:14 crc kubenswrapper[4814]: I0216 10:07:14.903310 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:14 crc kubenswrapper[4814]: I0216 10:07:14.912092 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.960083 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.960150 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest" Feb 16 10:07:14 crc kubenswrapper[4814]: E0216 10:07:14.960318 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.164:5001/podified-master-centos10/openstack-ceilometer-central:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59h75h647h75h68bh54bh667h5bh675hd9h57bh568h54fh5dfh8bh87h74h668h68fh59dh564h5dbh597h68chbbh58dh565h54bh697h57fhfdh578q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8cs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c86bfb21-74b9-406a-ae57-635d5ee7e5fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.045469 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs\") pod \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.045916 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca\") pod \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.045947 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: 
\"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.045973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.046027 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle\") pod \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.046105 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9pml\" (UniqueName: \"kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml\") pod \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.046608 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs" (OuterVolumeSpecName: "logs") pod "e7b7d8c3-8660-4e66-b15b-67b4d554b683" (UID: "e7b7d8c3-8660-4e66-b15b-67b4d554b683"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.047097 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vkq\" (UniqueName: \"kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.047134 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.047171 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data\") pod \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\" (UID: \"e7b7d8c3-8660-4e66-b15b-67b4d554b683\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.047262 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.047314 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts\") pod \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\" (UID: \"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5\") " Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.048063 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e7b7d8c3-8660-4e66-b15b-67b4d554b683-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.054527 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.054656 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq" (OuterVolumeSpecName: "kube-api-access-r8vkq") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "kube-api-access-r8vkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.055926 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml" (OuterVolumeSpecName: "kube-api-access-d9pml") pod "e7b7d8c3-8660-4e66-b15b-67b4d554b683" (UID: "e7b7d8c3-8660-4e66-b15b-67b4d554b683"). InnerVolumeSpecName "kube-api-access-d9pml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.056582 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts" (OuterVolumeSpecName: "scripts") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.066691 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.081065 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.083182 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data" (OuterVolumeSpecName: "config-data") pod "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" (UID: "a5cbae35-151a-4d4b-a0fc-417d4f4f60f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.084635 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7b7d8c3-8660-4e66-b15b-67b4d554b683" (UID: "e7b7d8c3-8660-4e66-b15b-67b4d554b683"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.091696 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e7b7d8c3-8660-4e66-b15b-67b4d554b683" (UID: "e7b7d8c3-8660-4e66-b15b-67b4d554b683"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.105566 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data" (OuterVolumeSpecName: "config-data") pod "e7b7d8c3-8660-4e66-b15b-67b4d554b683" (UID: "e7b7d8c3-8660-4e66-b15b-67b4d554b683"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150706 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150774 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150792 4814 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150806 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 
10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150821 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150833 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9pml\" (UniqueName: \"kubernetes.io/projected/e7b7d8c3-8660-4e66-b15b-67b4d554b683-kube-api-access-d9pml\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150846 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vkq\" (UniqueName: \"kubernetes.io/projected/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-kube-api-access-r8vkq\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150856 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150868 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7b7d8c3-8660-4e66-b15b-67b4d554b683-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.150880 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.259164 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m6cfw" event={"ID":"a5cbae35-151a-4d4b-a0fc-417d4f4f60f5","Type":"ContainerDied","Data":"bc0679b9bc827f8998a3a1b97123d890c6d66e3d423a90db97619e980e14f033"} Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.259214 4814 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc0679b9bc827f8998a3a1b97123d890c6d66e3d423a90db97619e980e14f033" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.259194 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m6cfw" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.260747 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerStarted","Data":"3ec2e155fa881adf3c3e1ed9fd66e3af4ad7443ee6683d4db49f6a2c0966c260"} Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.263580 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e7b7d8c3-8660-4e66-b15b-67b4d554b683","Type":"ContainerDied","Data":"fbd8ed33dbc81a8ca9001bfcb556058930fa7e84a558f188fac95437b4a46bf0"} Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.263674 4814 scope.go:117] "RemoveContainer" containerID="f54d193ed03314782a30a03ae97f4c3196378f9e250d7cc3e1c3f3995f0ac24d" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.263920 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.357165 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.385186 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.413345 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:15 crc kubenswrapper[4814]: E0216 10:07:15.414110 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" containerName="keystone-bootstrap" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.414295 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" containerName="keystone-bootstrap" Feb 16 10:07:15 crc kubenswrapper[4814]: E0216 10:07:15.414391 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api-log" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.414472 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api-log" Feb 16 10:07:15 crc kubenswrapper[4814]: E0216 10:07:15.414579 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.414670 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.415064 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api-log" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.415515 4814 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.415764 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" containerName="keystone-bootstrap" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.417425 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.427953 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.432839 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.559662 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxrx\" (UniqueName: \"kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.559847 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.559881 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 
10:07:15.559926 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.560280 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.662780 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.663323 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.663368 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.663472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data\") 
pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.663607 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxrx\" (UniqueName: \"kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.664441 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.671449 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.687106 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.687407 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxrx\" (UniqueName: \"kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.693512 4814 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " pod="openstack/watcher-api-0" Feb 16 10:07:15 crc kubenswrapper[4814]: I0216 10:07:15.754819 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.144921 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m6cfw"] Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.153124 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m6cfw"] Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.247784 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-j4vhw"] Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.249156 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.253634 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.253900 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7vv8q" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.254060 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.254202 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.257844 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.263430 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-j4vhw"] Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377606 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377674 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377737 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377764 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfm2\" (UniqueName: \"kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377840 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " 
pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.377897 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.480644 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.480973 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.481180 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.481295 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfm2\" (UniqueName: \"kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc 
kubenswrapper[4814]: I0216 10:07:16.481450 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.482085 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.488192 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.488870 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.489218 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.494187 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.495784 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.510440 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfm2\" (UniqueName: \"kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2\") pod \"keystone-bootstrap-j4vhw\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:16 crc kubenswrapper[4814]: I0216 10:07:16.587685 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-j4vhw" Feb 16 10:07:17 crc kubenswrapper[4814]: I0216 10:07:17.008237 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5cbae35-151a-4d4b-a0fc-417d4f4f60f5" path="/var/lib/kubelet/pods/a5cbae35-151a-4d4b-a0fc-417d4f4f60f5/volumes" Feb 16 10:07:17 crc kubenswrapper[4814]: I0216 10:07:17.008949 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" path="/var/lib/kubelet/pods/e7b7d8c3-8660-4e66-b15b-67b4d554b683/volumes" Feb 16 10:07:18 crc kubenswrapper[4814]: I0216 10:07:18.180874 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e7b7d8c3-8660-4e66-b15b-67b4d554b683" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.151:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.371819 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.372747 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.372984 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.164:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75r9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-p4vk6_openstack(c89e3cee-9acb-4b29-ab9a-ad50616aa9d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.374211 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-p4vk6" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.398985 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-p4vk6" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.774731 4814 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.774811 4814 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.164:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.774970 4814 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.164:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsm47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-47nvw_openstack(e57a813a-2457-4800-8eef-a91c409659f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 10:07:26 crc kubenswrapper[4814]: E0216 10:07:26.776230 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-47nvw" 
podUID="e57a813a-2457-4800-8eef-a91c409659f3" Feb 16 10:07:26 crc kubenswrapper[4814]: I0216 10:07:26.878122 4814 scope.go:117] "RemoveContainer" containerID="94904605b4526eb2b339409e32926a027f164056f9a692bb18443596ddeb2c7a" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.073139 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.090516 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.181711 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs\") pod \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.181887 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zvcl\" (UniqueName: \"kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl\") pod \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182002 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts\") pod \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182216 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data\") pod \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " Feb 16 
10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182309 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs\") pod \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182465 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sht7v\" (UniqueName: \"kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v\") pod \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182708 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key\") pod \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182736 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts\") pod \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182788 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key\") pod \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\" (UID: \"7d32f99f-7ecb-46e1-86b0-069ffcf7336d\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.182901 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data\") pod \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\" (UID: \"36f76141-0fc0-4be0-9f9e-d5bd3e662d91\") " Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.183407 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts" (OuterVolumeSpecName: "scripts") pod "36f76141-0fc0-4be0-9f9e-d5bd3e662d91" (UID: "36f76141-0fc0-4be0-9f9e-d5bd3e662d91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.183622 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs" (OuterVolumeSpecName: "logs") pod "36f76141-0fc0-4be0-9f9e-d5bd3e662d91" (UID: "36f76141-0fc0-4be0-9f9e-d5bd3e662d91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.183938 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.183960 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.186218 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs" (OuterVolumeSpecName: "logs") pod "7d32f99f-7ecb-46e1-86b0-069ffcf7336d" (UID: "7d32f99f-7ecb-46e1-86b0-069ffcf7336d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.186974 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data" (OuterVolumeSpecName: "config-data") pod "7d32f99f-7ecb-46e1-86b0-069ffcf7336d" (UID: "7d32f99f-7ecb-46e1-86b0-069ffcf7336d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.187038 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts" (OuterVolumeSpecName: "scripts") pod "7d32f99f-7ecb-46e1-86b0-069ffcf7336d" (UID: "7d32f99f-7ecb-46e1-86b0-069ffcf7336d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.188308 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data" (OuterVolumeSpecName: "config-data") pod "36f76141-0fc0-4be0-9f9e-d5bd3e662d91" (UID: "36f76141-0fc0-4be0-9f9e-d5bd3e662d91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.188991 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl" (OuterVolumeSpecName: "kube-api-access-7zvcl") pod "36f76141-0fc0-4be0-9f9e-d5bd3e662d91" (UID: "36f76141-0fc0-4be0-9f9e-d5bd3e662d91"). InnerVolumeSpecName "kube-api-access-7zvcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.190922 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7d32f99f-7ecb-46e1-86b0-069ffcf7336d" (UID: "7d32f99f-7ecb-46e1-86b0-069ffcf7336d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.194703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "36f76141-0fc0-4be0-9f9e-d5bd3e662d91" (UID: "36f76141-0fc0-4be0-9f9e-d5bd3e662d91"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.208417 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v" (OuterVolumeSpecName: "kube-api-access-sht7v") pod "7d32f99f-7ecb-46e1-86b0-069ffcf7336d" (UID: "7d32f99f-7ecb-46e1-86b0-069ffcf7336d"). InnerVolumeSpecName "kube-api-access-sht7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286758 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286818 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286830 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sht7v\" (UniqueName: \"kubernetes.io/projected/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-kube-api-access-sht7v\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286843 4814 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286854 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286863 4814 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d32f99f-7ecb-46e1-86b0-069ffcf7336d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286873 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.286883 4814 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zvcl\" (UniqueName: \"kubernetes.io/projected/36f76141-0fc0-4be0-9f9e-d5bd3e662d91-kube-api-access-7zvcl\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.406231 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77df547889-kjrxc" event={"ID":"7d32f99f-7ecb-46e1-86b0-069ffcf7336d","Type":"ContainerDied","Data":"0cb8d32d14eae447fafc74fa6cb68f634dc9c4c5a63895a75ad4b5897f3f121a"} Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.406269 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77df547889-kjrxc" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.410240 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c9b5486bf-67f5h" event={"ID":"36f76141-0fc0-4be0-9f9e-d5bd3e662d91","Type":"ContainerDied","Data":"1c22e189657935fd5d5e3a4974d7664203d0203cdf40f521ab66936560ef2c03"} Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.410424 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c9b5486bf-67f5h" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.422679 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="dnsmasq-dns" containerID="cri-o://0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066" gracePeriod=10 Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.422959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" event={"ID":"2f2ed9b7-2884-4466-a5b3-f09640444423","Type":"ContainerStarted","Data":"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066"} Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.423015 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:07:27 crc kubenswrapper[4814]: E0216 10:07:27.425951 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.164:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-47nvw" podUID="e57a813a-2457-4800-8eef-a91c409659f3" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.445783 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.462074 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.482988 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" podStartSLOduration=39.48295551 podStartE2EDuration="39.48295551s" podCreationTimestamp="2026-02-16 10:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:27.477219535 +0000 UTC m=+1305.170375715" watchObservedRunningTime="2026-02-16 10:07:27.48295551 +0000 UTC m=+1305.176111690" Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.531119 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"] Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.540790 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c9b5486bf-67f5h"] Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.577880 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:07:27 crc kubenswrapper[4814]: I0216 10:07:27.585630 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77df547889-kjrxc"] Feb 16 10:07:27 crc kubenswrapper[4814]: W0216 10:07:27.669252 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff1c1c3_2b57_4e33_a1ed_ca7ac7a63f40.slice/crio-ab213a8ad9dce11553ce6f3920c26820406ad0db5278067f092713707c97a6e3 WatchSource:0}: Error finding container ab213a8ad9dce11553ce6f3920c26820406ad0db5278067f092713707c97a6e3: Status 404 returned error can't find the container with id ab213a8ad9dce11553ce6f3920c26820406ad0db5278067f092713707c97a6e3 Feb 16 10:07:27 crc kubenswrapper[4814]: W0216 10:07:27.703201 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5328dae7_ac38_4d55_aa96_b7a3387cb13f.slice/crio-e7da8f99f7d79b8c3e33215ef7aa2b4fc68339d6c8ab9c06e5853e9059c3a2fb WatchSource:0}: Error finding container e7da8f99f7d79b8c3e33215ef7aa2b4fc68339d6c8ab9c06e5853e9059c3a2fb: Status 404 returned error can't find the container with id e7da8f99f7d79b8c3e33215ef7aa2b4fc68339d6c8ab9c06e5853e9059c3a2fb Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 
10:07:28.285386 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76696f58b-dfzph"] Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.294397 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.450093 4814 generic.go:334] "Generic (PLEG): container finished" podID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerID="0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066" exitCode=0 Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.450875 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.451706 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" event={"ID":"2f2ed9b7-2884-4466-a5b3-f09640444423","Type":"ContainerDied","Data":"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066"} Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.451859 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cd7864d7-r6jwm" event={"ID":"2f2ed9b7-2884-4466-a5b3-f09640444423","Type":"ContainerDied","Data":"0ba16992855289067195531080e66942ae9f458910a596a5c6df65037114cacf"} Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.451909 4814 scope.go:117] "RemoveContainer" containerID="0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.455605 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76696f58b-dfzph" event={"ID":"d4064477-94ed-4129-819b-63df1d34d227","Type":"ContainerStarted","Data":"68280c4b94b0f247f65e7108ecb98b74df71060cc6cc8dbe2afe9f39b0e9667f"} Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.459433 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.459660 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.459720 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.459819 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vnjk\" (UniqueName: \"kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.459933 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.460101 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0\") pod \"2f2ed9b7-2884-4466-a5b3-f09640444423\" (UID: \"2f2ed9b7-2884-4466-a5b3-f09640444423\") " Feb 16 10:07:28 crc 
kubenswrapper[4814]: I0216 10:07:28.463859 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerStarted","Data":"ab213a8ad9dce11553ce6f3920c26820406ad0db5278067f092713707c97a6e3"} Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.478785 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk" (OuterVolumeSpecName: "kube-api-access-2vnjk") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). InnerVolumeSpecName "kube-api-access-2vnjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.493471 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" event={"ID":"5328dae7-ac38-4d55-aa96-b7a3387cb13f","Type":"ContainerStarted","Data":"e7da8f99f7d79b8c3e33215ef7aa2b4fc68339d6c8ab9c06e5853e9059c3a2fb"} Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.534002 4814 scope.go:117] "RemoveContainer" containerID="f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.563818 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vnjk\" (UniqueName: \"kubernetes.io/projected/2f2ed9b7-2884-4466-a5b3-f09640444423-kube-api-access-2vnjk\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.601894 4814 scope.go:117] "RemoveContainer" containerID="0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066" Feb 16 10:07:28 crc kubenswrapper[4814]: E0216 10:07:28.602497 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066\": container with ID starting with 
0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066 not found: ID does not exist" containerID="0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.602571 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066"} err="failed to get container status \"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066\": rpc error: code = NotFound desc = could not find container \"0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066\": container with ID starting with 0552f901134c88ba93276dd680913d4977839fed5ecbc329b5734d166b6f8066 not found: ID does not exist" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.602611 4814 scope.go:117] "RemoveContainer" containerID="f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404" Feb 16 10:07:28 crc kubenswrapper[4814]: E0216 10:07:28.602983 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404\": container with ID starting with f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404 not found: ID does not exist" containerID="f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.603009 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404"} err="failed to get container status \"f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404\": rpc error: code = NotFound desc = could not find container \"f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404\": container with ID starting with f04dc61a04d093e5da76e9b8d74d12eaad287daf26aa72e72486e96e2a5dd404 not found: ID does not 
exist" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.656818 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-j4vhw"] Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.689340 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.695319 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: W0216 10:07:28.714219 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5911fde8_d13a_4c6a_941e_e25515983484.slice/crio-c4773f8ad50cd990499809229249dabc0148a335030457a2c82413f5c670ca47 WatchSource:0}: Error finding container c4773f8ad50cd990499809229249dabc0148a335030457a2c82413f5c670ca47: Status 404 returned error can't find the container with id c4773f8ad50cd990499809229249dabc0148a335030457a2c82413f5c670ca47 Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.768228 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.828618 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.831842 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.856265 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.871065 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.871121 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.877698 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.890104 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config" (OuterVolumeSpecName: "config") pod "2f2ed9b7-2884-4466-a5b3-f09640444423" (UID: "2f2ed9b7-2884-4466-a5b3-f09640444423"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.984103 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:28 crc kubenswrapper[4814]: I0216 10:07:28.984155 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f2ed9b7-2884-4466-a5b3-f09640444423-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.033897 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f76141-0fc0-4be0-9f9e-d5bd3e662d91" path="/var/lib/kubelet/pods/36f76141-0fc0-4be0-9f9e-d5bd3e662d91/volumes" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.034369 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d32f99f-7ecb-46e1-86b0-069ffcf7336d" path="/var/lib/kubelet/pods/7d32f99f-7ecb-46e1-86b0-069ffcf7336d/volumes" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.223598 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.235186 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58cd7864d7-r6jwm"] Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.401125 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:07:29 crc kubenswrapper[4814]: W0216 10:07:29.424085 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83886ba5_6048_49e2_9750_add1874f5929.slice/crio-f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511 WatchSource:0}: Error finding container 
f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511: Status 404 returned error can't find the container with id f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511 Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.535181 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerStarted","Data":"f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.542257 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4vhw" event={"ID":"a350ef7d-4057-40fd-807d-5b29d2b3b465","Type":"ContainerStarted","Data":"10fb5bb4f397a76c0c678ba3167391f5faf496527465b5e668eceda1dc129228"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.542375 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4vhw" event={"ID":"a350ef7d-4057-40fd-807d-5b29d2b3b465","Type":"ContainerStarted","Data":"6ba09cfbb594a7ad68f727eb63f4a74dbd0668d8c11d518bbae86aa7961d886e"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.559019 4814 generic.go:334] "Generic (PLEG): container finished" podID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerID="0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f" exitCode=0 Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.559146 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" event={"ID":"5328dae7-ac38-4d55-aa96-b7a3387cb13f","Type":"ContainerDied","Data":"0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.578196 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-j4vhw" podStartSLOduration=13.578168237 podStartE2EDuration="13.578168237s" podCreationTimestamp="2026-02-16 10:07:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:29.564894011 +0000 UTC m=+1307.258050191" watchObservedRunningTime="2026-02-16 10:07:29.578168237 +0000 UTC m=+1307.271324417" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.583195 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76696f58b-dfzph" event={"ID":"d4064477-94ed-4129-819b-63df1d34d227","Type":"ContainerStarted","Data":"83ad7046ca7b1d0481161f1d88feb145e3cfd78c81ff96b2061fcde1eb8bf1ba"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.583243 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76696f58b-dfzph" event={"ID":"d4064477-94ed-4129-819b-63df1d34d227","Type":"ContainerStarted","Data":"411a0133e8483381e8bb607275109d2e67a3d8bf1b3255d6c8fbba8888fe9d6b"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.595244 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerStarted","Data":"a3b7fbb9342bb2d0a8a281033cd00be9db237d5e9833458a68dc92e72f9e66ca"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.602903 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerStarted","Data":"2bcd2eef0544b96319d58c16f6d66d501f48eb0514d788fedc88e1d79bf58d11"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.602954 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerStarted","Data":"9193eee49c8d438afe4072cfc74fbac4288ac5a657c91a0b27084b4ead631a6c"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.603097 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-7f55894665-vd6fz" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon-log" containerID="cri-o://9193eee49c8d438afe4072cfc74fbac4288ac5a657c91a0b27084b4ead631a6c" gracePeriod=30 Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.603173 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f55894665-vd6fz" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon" containerID="cri-o://2bcd2eef0544b96319d58c16f6d66d501f48eb0514d788fedc88e1d79bf58d11" gracePeriod=30 Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.625257 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-76696f58b-dfzph" podStartSLOduration=32.625233021 podStartE2EDuration="32.625233021s" podCreationTimestamp="2026-02-16 10:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:29.615469015 +0000 UTC m=+1307.308625195" watchObservedRunningTime="2026-02-16 10:07:29.625233021 +0000 UTC m=+1307.318389201" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.630493 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"278f75a7-f7ec-4e83-9c09-83ceb414b5a0","Type":"ContainerStarted","Data":"5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.655801 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerStarted","Data":"4751518f5963e87649e42f66d9c69d624f098c833c7af42d58ae4abb09a66803"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.656397 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f55894665-vd6fz" podStartSLOduration=26.055355631 podStartE2EDuration="38.656382236s" 
podCreationTimestamp="2026-02-16 10:06:51 +0000 UTC" firstStartedPulling="2026-02-16 10:07:15.389798354 +0000 UTC m=+1293.082954534" lastFinishedPulling="2026-02-16 10:07:27.990824959 +0000 UTC m=+1305.683981139" observedRunningTime="2026-02-16 10:07:29.636617119 +0000 UTC m=+1307.329773319" watchObservedRunningTime="2026-02-16 10:07:29.656382236 +0000 UTC m=+1307.349538416" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.669655 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=7.953670637 podStartE2EDuration="42.669635112s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="2026-02-16 10:06:50.070312 +0000 UTC m=+1267.763468180" lastFinishedPulling="2026-02-16 10:07:24.786276485 +0000 UTC m=+1302.479432655" observedRunningTime="2026-02-16 10:07:29.665248464 +0000 UTC m=+1307.358404644" watchObservedRunningTime="2026-02-16 10:07:29.669635112 +0000 UTC m=+1307.362791302" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.695480 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerStarted","Data":"a1da293499103e696851c13f46701b761792f85dbe30cea54308a5edbc062afc"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.695572 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerStarted","Data":"c4773f8ad50cd990499809229249dabc0148a335030457a2c82413f5c670ca47"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.697987 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerStarted","Data":"abd3db39e886c6ba30c5622e4cf6c4942c951a85efb140f018241ca52f2ee5df"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.706186 4814 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3873abc6-2d46-4624-84d9-b53559d1d83f","Type":"ContainerStarted","Data":"463ffe3a7b77654af3da62559b1b0e57d6037c101701a089cc6669b152c63ff3"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.735088 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=8.087785458 podStartE2EDuration="42.735067775s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="2026-02-16 10:06:50.139123681 +0000 UTC m=+1267.832279861" lastFinishedPulling="2026-02-16 10:07:24.786405998 +0000 UTC m=+1302.479562178" observedRunningTime="2026-02-16 10:07:29.731017403 +0000 UTC m=+1307.424173603" watchObservedRunningTime="2026-02-16 10:07:29.735067775 +0000 UTC m=+1307.428223965" Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.742518 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lvv6" event={"ID":"e5a3e754-132c-4c4e-9593-91ca3f391363","Type":"ContainerStarted","Data":"4d94f90ae5a3994a4186ff42d4734c084f593b9394d53811cb9ea2d928383a0b"} Feb 16 10:07:29 crc kubenswrapper[4814]: I0216 10:07:29.776102 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8lvv6" podStartSLOduration=5.279962684 podStartE2EDuration="41.776060517s" podCreationTimestamp="2026-02-16 10:06:48 +0000 UTC" firstStartedPulling="2026-02-16 10:06:50.418665957 +0000 UTC m=+1268.111822137" lastFinishedPulling="2026-02-16 10:07:26.91476379 +0000 UTC m=+1304.607919970" observedRunningTime="2026-02-16 10:07:29.767394093 +0000 UTC m=+1307.460550293" watchObservedRunningTime="2026-02-16 10:07:29.776060517 +0000 UTC m=+1307.469216697" Feb 16 10:07:30 crc kubenswrapper[4814]: I0216 10:07:30.761562 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerStarted","Data":"78c61bfa9f0841c861e3f8a229dd24d50a8b3b141c54b4f540b9cfc6303b503b"} Feb 16 10:07:30 crc kubenswrapper[4814]: I0216 10:07:30.765510 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerStarted","Data":"87871cb885e9b3909b68ea04065f4d9407f5cf1aba6d478b7386c5f4876768fd"} Feb 16 10:07:30 crc kubenswrapper[4814]: I0216 10:07:30.768188 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerStarted","Data":"6a2a36eea931c3dfde895d8e48f7b2c745464991dcf338c3f79d2fbc6e89c232"} Feb 16 10:07:30 crc kubenswrapper[4814]: I0216 10:07:30.796849 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f95b74b5b-mpwlg" podStartSLOduration=34.261056609 podStartE2EDuration="34.796829208s" podCreationTimestamp="2026-02-16 10:06:56 +0000 UTC" firstStartedPulling="2026-02-16 10:07:27.677011283 +0000 UTC m=+1305.370167473" lastFinishedPulling="2026-02-16 10:07:28.212783882 +0000 UTC m=+1305.905940072" observedRunningTime="2026-02-16 10:07:30.788244975 +0000 UTC m=+1308.481401165" watchObservedRunningTime="2026-02-16 10:07:30.796829208 +0000 UTC m=+1308.489985388" Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 10:07:31.007435 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" path="/var/lib/kubelet/pods/2f2ed9b7-2884-4466-a5b3-f09640444423/volumes" Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 10:07:31.787571 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" event={"ID":"5328dae7-ac38-4d55-aa96-b7a3387cb13f","Type":"ContainerStarted","Data":"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9"} Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 
10:07:31.795813 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerStarted","Data":"3cb3e7efb8fc7a462d7b249d5b36781056b147b9ccd74858ca936a699c4d9b08"} Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 10:07:31.801031 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerStarted","Data":"70181e14dfe49aa520a8a0cc43a4c6f8fedb72a86af76911b5adefdf67f203c3"} Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 10:07:31.816985 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" podStartSLOduration=35.816933214 podStartE2EDuration="35.816933214s" podCreationTimestamp="2026-02-16 10:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:31.810391882 +0000 UTC m=+1309.503548062" watchObservedRunningTime="2026-02-16 10:07:31.816933214 +0000 UTC m=+1309.510089394" Feb 16 10:07:31 crc kubenswrapper[4814]: I0216 10:07:31.844809 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=16.844782193 podStartE2EDuration="16.844782193s" podCreationTimestamp="2026-02-16 10:07:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:31.833403585 +0000 UTC m=+1309.526559765" watchObservedRunningTime="2026-02-16 10:07:31.844782193 +0000 UTC m=+1309.537938373" Feb 16 10:07:32 crc kubenswrapper[4814]: I0216 10:07:32.347772 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:07:32 crc kubenswrapper[4814]: I0216 10:07:32.815150 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:07:33 crc kubenswrapper[4814]: I0216 10:07:33.258867 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.755190 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.756965 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.756983 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.894045 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-log" containerID="cri-o://78c61bfa9f0841c861e3f8a229dd24d50a8b3b141c54b4f540b9cfc6303b503b" gracePeriod=30 Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.894160 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-httpd" containerID="cri-o://3cb3e7efb8fc7a462d7b249d5b36781056b147b9ccd74858ca936a699c4d9b08" gracePeriod=30 Feb 16 10:07:35 crc kubenswrapper[4814]: I0216 10:07:35.925320 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.925294791 podStartE2EDuration="38.925294791s" podCreationTimestamp="2026-02-16 10:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:35.921381742 +0000 UTC m=+1313.614537942" watchObservedRunningTime="2026-02-16 10:07:35.925294791 
+0000 UTC m=+1313.618450971" Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.614826 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.703199 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.703582 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="dnsmasq-dns" containerID="cri-o://8136a7b9e0d23176422f169ef301f208938d614ee67b9aa08097ffa2eea1bc17" gracePeriod=10 Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.795803 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.905994 4814 generic.go:334] "Generic (PLEG): container finished" podID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerID="78c61bfa9f0841c861e3f8a229dd24d50a8b3b141c54b4f540b9cfc6303b503b" exitCode=143 Feb 16 10:07:36 crc kubenswrapper[4814]: I0216 10:07:36.906060 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerDied","Data":"78c61bfa9f0841c861e3f8a229dd24d50a8b3b141c54b4f540b9cfc6303b503b"} Feb 16 10:07:37 crc kubenswrapper[4814]: I0216 10:07:37.360389 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:07:37 crc kubenswrapper[4814]: I0216 10:07:37.362105 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:07:37 crc kubenswrapper[4814]: I0216 10:07:37.870470 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:07:37 crc kubenswrapper[4814]: I0216 10:07:37.870557 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.175800 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.209368 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.259102 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.319220 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.953603 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerStarted","Data":"1ca0e15a8c6335eba0f51179a0ef84993248736ec2aadcd570683c7ec71c8636"} Feb 16 10:07:38 crc kubenswrapper[4814]: I0216 10:07:38.954398 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.020228 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.032019 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.133680 
4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.158595 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.176295 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: connect: connection refused"
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.482352 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0"
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.977865 4814 generic.go:334] "Generic (PLEG): container finished" podID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerID="8136a7b9e0d23176422f169ef301f208938d614ee67b9aa08097ffa2eea1bc17" exitCode=0
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.978009 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" event={"ID":"a6929b69-85c9-4084-9ff5-4e3a6af602dd","Type":"ContainerDied","Data":"8136a7b9e0d23176422f169ef301f208938d614ee67b9aa08097ffa2eea1bc17"}
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.984264 4814 generic.go:334] "Generic (PLEG): container finished" podID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerID="3cb3e7efb8fc7a462d7b249d5b36781056b147b9ccd74858ca936a699c4d9b08" exitCode=0
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.984423 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerDied","Data":"3cb3e7efb8fc7a462d7b249d5b36781056b147b9ccd74858ca936a699c4d9b08"}
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.985370 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-log" containerID="cri-o://70181e14dfe49aa520a8a0cc43a4c6f8fedb72a86af76911b5adefdf67f203c3" gracePeriod=30
Feb 16 10:07:39 crc kubenswrapper[4814]: I0216 10:07:39.985432 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-httpd" containerID="cri-o://1ca0e15a8c6335eba0f51179a0ef84993248736ec2aadcd570683c7ec71c8636" gracePeriod=30
Feb 16 10:07:40 crc kubenswrapper[4814]: I0216 10:07:40.026987 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=43.026959643 podStartE2EDuration="43.026959643s" podCreationTimestamp="2026-02-16 10:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:40.016955913 +0000 UTC m=+1317.710112093" watchObservedRunningTime="2026-02-16 10:07:40.026959643 +0000 UTC m=+1317.720115823"
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.000874 4814 generic.go:334] "Generic (PLEG): container finished" podID="83886ba5-6048-49e2-9750-add1874f5929" containerID="1ca0e15a8c6335eba0f51179a0ef84993248736ec2aadcd570683c7ec71c8636" exitCode=0
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.001374 4814 generic.go:334] "Generic (PLEG): container finished" podID="83886ba5-6048-49e2-9750-add1874f5929" containerID="70181e14dfe49aa520a8a0cc43a4c6f8fedb72a86af76911b5adefdf67f203c3" exitCode=143
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.015071 4814 generic.go:334] "Generic (PLEG): container finished" podID="a350ef7d-4057-40fd-807d-5b29d2b3b465" containerID="10fb5bb4f397a76c0c678ba3167391f5faf496527465b5e668eceda1dc129228" exitCode=0
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.015321 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" containerName="watcher-applier" containerID="cri-o://5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" gracePeriod=30
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.015528 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="3873abc6-2d46-4624-84d9-b53559d1d83f" containerName="watcher-decision-engine" containerID="cri-o://463ffe3a7b77654af3da62559b1b0e57d6037c101701a089cc6669b152c63ff3" gracePeriod=30
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.016874 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerDied","Data":"1ca0e15a8c6335eba0f51179a0ef84993248736ec2aadcd570683c7ec71c8636"}
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.016920 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerDied","Data":"70181e14dfe49aa520a8a0cc43a4c6f8fedb72a86af76911b5adefdf67f203c3"}
Feb 16 10:07:41 crc kubenswrapper[4814]: I0216 10:07:41.016939 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4vhw" event={"ID":"a350ef7d-4057-40fd-807d-5b29d2b3b465","Type":"ContainerDied","Data":"10fb5bb4f397a76c0c678ba3167391f5faf496527465b5e668eceda1dc129228"}
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.092374 4814 generic.go:334] "Generic (PLEG): container finished" podID="e5a3e754-132c-4c4e-9593-91ca3f391363" containerID="4d94f90ae5a3994a4186ff42d4734c084f593b9394d53811cb9ea2d928383a0b" exitCode=0
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.093351 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lvv6" event={"ID":"e5a3e754-132c-4c4e-9593-91ca3f391363","Type":"ContainerDied","Data":"4d94f90ae5a3994a4186ff42d4734c084f593b9394d53811cb9ea2d928383a0b"}
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.212143 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs"
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296099 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296238 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296303 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296402 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296425 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j49ks\" (UniqueName: \"kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.296475 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc\") pod \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\" (UID: \"a6929b69-85c9-4084-9ff5-4e3a6af602dd\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.325667 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks" (OuterVolumeSpecName: "kube-api-access-j49ks") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "kube-api-access-j49ks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.403426 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j49ks\" (UniqueName: \"kubernetes.io/projected/a6929b69-85c9-4084-9ff5-4e3a6af602dd-kube-api-access-j49ks\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.414519 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.505584 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.516052 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.520244 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.622233 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.723799 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.723865 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.724045 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzxnl\" (UniqueName: \"kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.724117 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.724147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.724183 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.724277 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle\") pod \"57fc4b96-e2a4-4505-8500-7f476f36f799\" (UID: \"57fc4b96-e2a4-4505-8500-7f476f36f799\") "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.727445 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs" (OuterVolumeSpecName: "logs") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.728079 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.765026 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.769189 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts" (OuterVolumeSpecName: "scripts") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.770196 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.779567 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl" (OuterVolumeSpecName: "kube-api-access-fzxnl") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "kube-api-access-fzxnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.782231 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config" (OuterVolumeSpecName: "config") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.827938 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzxnl\" (UniqueName: \"kubernetes.io/projected/57fc4b96-e2a4-4505-8500-7f476f36f799-kube-api-access-fzxnl\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828006 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828208 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828226 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828236 4814 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/57fc4b96-e2a4-4505-8500-7f476f36f799-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828247 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-config\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.828260 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.906325 4814 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.931112 4814 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.959870 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data" (OuterVolumeSpecName: "config-data") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:42 crc kubenswrapper[4814]: I0216 10:07:42.982964 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6929b69-85c9-4084-9ff5-4e3a6af602dd" (UID: "a6929b69-85c9-4084-9ff5-4e3a6af602dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.034706 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.049587 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6929b69-85c9-4084-9ff5-4e3a6af602dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.118065 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.118906 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57fc4b96-e2a4-4505-8500-7f476f36f799" (UID: "57fc4b96-e2a4-4505-8500-7f476f36f799"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.123861 4814 generic.go:334] "Generic (PLEG): container finished" podID="3873abc6-2d46-4624-84d9-b53559d1d83f" containerID="463ffe3a7b77654af3da62559b1b0e57d6037c101701a089cc6669b152c63ff3" exitCode=1
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.127943 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.160167 4814 generic.go:334] "Generic (PLEG): container finished" podID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" containerID="5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" exitCode=0
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.163518 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fc4b96-e2a4-4505-8500-7f476f36f799-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.262699 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a is running failed: container process not found" containerID="5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.266460 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a is running failed: container process not found" containerID="5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.277811 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a is running failed: container process not found" containerID="5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.277903 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" containerName="watcher-applier"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338515 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"57fc4b96-e2a4-4505-8500-7f476f36f799","Type":"ContainerDied","Data":"abd3db39e886c6ba30c5622e4cf6c4942c951a85efb140f018241ca52f2ee5df"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338579 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3873abc6-2d46-4624-84d9-b53559d1d83f","Type":"ContainerDied","Data":"463ffe3a7b77654af3da62559b1b0e57d6037c101701a089cc6669b152c63ff3"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338595 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" event={"ID":"a6929b69-85c9-4084-9ff5-4e3a6af602dd","Type":"ContainerDied","Data":"9a39520aeee0ce611da85ecce2e7bebf74c86f9eede4a3ce54800fb878184794"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338608 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"83886ba5-6048-49e2-9750-add1874f5929","Type":"ContainerDied","Data":"f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338621 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c4f364d9bf71f119df215a019fa9d17b80a091e30c563b51c82c70e8175511"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338630 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-j4vhw" event={"ID":"a350ef7d-4057-40fd-807d-5b29d2b3b465","Type":"ContainerDied","Data":"6ba09cfbb594a7ad68f727eb63f4a74dbd0668d8c11d518bbae86aa7961d886e"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338641 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ba09cfbb594a7ad68f727eb63f4a74dbd0668d8c11d518bbae86aa7961d886e"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338648 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-47nvw" event={"ID":"e57a813a-2457-4800-8eef-a91c409659f3","Type":"ContainerStarted","Data":"30319370de6609a922739d32ec09f9f87f94658ae16fb92274c415dc0a46e20f"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338660 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"278f75a7-f7ec-4e83-9c09-83ceb414b5a0","Type":"ContainerDied","Data":"5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.338672 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerStarted","Data":"00e8670591c5639d8864385e69b9b98adfd6563750769cb0d2dc36b57a4eda07"}
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.340966 4814 scope.go:117] "RemoveContainer" containerID="3cb3e7efb8fc7a462d7b249d5b36781056b147b9ccd74858ca936a699c4d9b08"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.387595 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-47nvw" podStartSLOduration=3.916068484 podStartE2EDuration="55.387552618s" podCreationTimestamp="2026-02-16 10:06:48 +0000 UTC" firstStartedPulling="2026-02-16 10:06:50.423166591 +0000 UTC m=+1268.116322771" lastFinishedPulling="2026-02-16 10:07:41.894650725 +0000 UTC m=+1319.587806905" observedRunningTime="2026-02-16 10:07:43.362702239 +0000 UTC m=+1321.055858439" watchObservedRunningTime="2026-02-16 10:07:43.387552618 +0000 UTC m=+1321.080708798"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.392416 4814 scope.go:117] "RemoveContainer" containerID="78c61bfa9f0841c861e3f8a229dd24d50a8b3b141c54b4f540b9cfc6303b503b"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.404303 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.462889 4814 scope.go:117] "RemoveContainer" containerID="8136a7b9e0d23176422f169ef301f208938d614ee67b9aa08097ffa2eea1bc17"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.537950 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4vhw"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.572461 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.572586 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.572791 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtfkd\" (UniqueName: \"kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.572889 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.572977 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.573017 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.573035 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run\") pod \"83886ba5-6048-49e2-9750-add1874f5929\" (UID: \"83886ba5-6048-49e2-9750-add1874f5929\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.573129 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs" (OuterVolumeSpecName: "logs") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.573512 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.578119 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.592896 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts" (OuterVolumeSpecName: "scripts") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.593246 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd" (OuterVolumeSpecName: "kube-api-access-jtfkd") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "kube-api-access-jtfkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.597703 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.620786 4814 scope.go:117] "RemoveContainer" containerID="c182a7239b640c357a29829a6eedac2d3178459acaca2202a7a8c5071cebd7d4"
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.647666 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.673282 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data" (OuterVolumeSpecName: "config-data") pod "83886ba5-6048-49e2-9750-add1874f5929" (UID: "83886ba5-6048-49e2-9750-add1874f5929"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674442 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674599 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvfm2\" (UniqueName: \"kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674656 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674701 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674737 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.674781 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680293 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtfkd\" (UniqueName: \"kubernetes.io/projected/83886ba5-6048-49e2-9750-add1874f5929-kube-api-access-jtfkd\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680375 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680394 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680463 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680478 4814 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/83886ba5-6048-49e2-9750-add1874f5929-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680495 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83886ba5-6048-49e2-9750-add1874f5929-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.680806 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.685103 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts" (OuterVolumeSpecName: "scripts") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.685185 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.695192 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2" (OuterVolumeSpecName: "kube-api-access-vvfm2") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "kube-api-access-vvfm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.710387 4814 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.712518 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle podName:a350ef7d-4057-40fd-807d-5b29d2b3b465 nodeName:}" failed. No retries permitted until 2026-02-16 10:07:44.212476977 +0000 UTC m=+1321.905633327 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465") : error deleting /var/lib/kubelet/pods/a350ef7d-4057-40fd-807d-5b29d2b3b465/volume-subpaths: remove /var/lib/kubelet/pods/a350ef7d-4057-40fd-807d-5b29d2b3b465/volume-subpaths: no such file or directory
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.716357 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data" (OuterVolumeSpecName: "config-data") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788769 4814 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788808 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvfm2\" (UniqueName: \"kubernetes.io/projected/a350ef7d-4057-40fd-807d-5b29d2b3b465-kube-api-access-vvfm2\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788821 4814 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788830 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName:
\"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788839 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.788849 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.860698 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.937408 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.948161 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.964843 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.980561 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8lvv6" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.990754 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991235 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a350ef7d-4057-40fd-807d-5b29d2b3b465" containerName="keystone-bootstrap" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991255 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a350ef7d-4057-40fd-807d-5b29d2b3b465" containerName="keystone-bootstrap" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991272 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991278 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991285 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991291 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991305 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991310 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991320 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" 
containerName="watcher-applier" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991326 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" containerName="watcher-applier" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991358 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="init" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991366 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="init" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991378 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="dnsmasq-dns" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991385 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="dnsmasq-dns" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991398 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991404 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991414 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a3e754-132c-4c4e-9593-91ca3f391363" containerName="placement-db-sync" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991420 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a3e754-132c-4c4e-9593-91ca3f391363" containerName="placement-db-sync" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991429 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="dnsmasq-dns" Feb 16 10:07:43 crc 
kubenswrapper[4814]: I0216 10:07:43.991500 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="dnsmasq-dns" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991513 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3873abc6-2d46-4624-84d9-b53559d1d83f" containerName="watcher-decision-engine" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991521 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3873abc6-2d46-4624-84d9-b53559d1d83f" containerName="watcher-decision-engine" Feb 16 10:07:43 crc kubenswrapper[4814]: E0216 10:07:43.991560 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="init" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991568 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="init" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991854 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991868 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a3e754-132c-4c4e-9593-91ca3f391363" containerName="placement-db-sync" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991878 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3873abc6-2d46-4624-84d9-b53559d1d83f" containerName="watcher-decision-engine" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991890 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" containerName="watcher-applier" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991896 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a350ef7d-4057-40fd-807d-5b29d2b3b465" containerName="keystone-bootstrap" Feb 16 10:07:43 crc 
kubenswrapper[4814]: I0216 10:07:43.991911 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" containerName="dnsmasq-dns" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991922 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-log" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991935 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991942 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f2ed9b7-2884-4466-a5b3-f09640444423" containerName="dnsmasq-dns" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.991955 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="83886ba5-6048-49e2-9750-add1874f5929" containerName="glance-httpd" Feb 16 10:07:43 crc kubenswrapper[4814]: I0216 10:07:43.997817 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.006641 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8psq\" (UniqueName: \"kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq\") pod \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.006884 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs\") pod \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.006943 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data\") pod \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.006973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle\") pod \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\" (UID: \"278f75a7-f7ec-4e83-9c09-83ceb414b5a0\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.007864 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs" (OuterVolumeSpecName: "logs") pod "278f75a7-f7ec-4e83-9c09-83ceb414b5a0" (UID: "278f75a7-f7ec-4e83-9c09-83ceb414b5a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.009151 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.011436 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.023709 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.058350 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq" (OuterVolumeSpecName: "kube-api-access-h8psq") pod "278f75a7-f7ec-4e83-9c09-83ceb414b5a0" (UID: "278f75a7-f7ec-4e83-9c09-83ceb414b5a0"). InnerVolumeSpecName "kube-api-access-h8psq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data\") pod \"3873abc6-2d46-4624-84d9-b53559d1d83f\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116208 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc9vj\" (UniqueName: \"kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj\") pod \"3873abc6-2d46-4624-84d9-b53559d1d83f\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116297 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs\") pod 
\"3873abc6-2d46-4624-84d9-b53559d1d83f\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116345 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs\") pod \"e5a3e754-132c-4c4e-9593-91ca3f391363\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116446 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle\") pod \"3873abc6-2d46-4624-84d9-b53559d1d83f\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116513 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle\") pod \"e5a3e754-132c-4c4e-9593-91ca3f391363\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116561 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffzz8\" (UniqueName: \"kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8\") pod \"e5a3e754-132c-4c4e-9593-91ca3f391363\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116611 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data\") pod \"e5a3e754-132c-4c4e-9593-91ca3f391363\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116669 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca\") pod \"3873abc6-2d46-4624-84d9-b53559d1d83f\" (UID: \"3873abc6-2d46-4624-84d9-b53559d1d83f\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.116738 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts\") pod \"e5a3e754-132c-4c4e-9593-91ca3f391363\" (UID: \"e5a3e754-132c-4c4e-9593-91ca3f391363\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117479 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117516 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117566 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117648 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117770 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117838 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117874 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqk6d\" (UniqueName: \"kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.117941 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.118117 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.118132 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8psq\" (UniqueName: \"kubernetes.io/projected/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-kube-api-access-h8psq\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.122039 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj" (OuterVolumeSpecName: "kube-api-access-mc9vj") pod "3873abc6-2d46-4624-84d9-b53559d1d83f" (UID: "3873abc6-2d46-4624-84d9-b53559d1d83f"). InnerVolumeSpecName "kube-api-access-mc9vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.124565 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs" (OuterVolumeSpecName: "logs") pod "3873abc6-2d46-4624-84d9-b53559d1d83f" (UID: "3873abc6-2d46-4624-84d9-b53559d1d83f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.124852 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs" (OuterVolumeSpecName: "logs") pod "e5a3e754-132c-4c4e-9593-91ca3f391363" (UID: "e5a3e754-132c-4c4e-9593-91ca3f391363"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.142281 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts" (OuterVolumeSpecName: "scripts") pod "e5a3e754-132c-4c4e-9593-91ca3f391363" (UID: "e5a3e754-132c-4c4e-9593-91ca3f391363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.149217 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8" (OuterVolumeSpecName: "kube-api-access-ffzz8") pod "e5a3e754-132c-4c4e-9593-91ca3f391363" (UID: "e5a3e754-132c-4c4e-9593-91ca3f391363"). InnerVolumeSpecName "kube-api-access-ffzz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.176668 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data" (OuterVolumeSpecName: "config-data") pod "278f75a7-f7ec-4e83-9c09-83ceb414b5a0" (UID: "278f75a7-f7ec-4e83-9c09-83ceb414b5a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.191250 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"278f75a7-f7ec-4e83-9c09-83ceb414b5a0","Type":"ContainerDied","Data":"a1cd53bd595c768f37ac9e87fd298902a74caa002c83887a5e5be88398472a56"} Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.191338 4814 scope.go:117] "RemoveContainer" containerID="5814015aa4b3d1a2da05929ccd8174f1b8e5041bf6c23a712394101e3b27691a" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.191612 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.202938 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p4vk6" event={"ID":"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4","Type":"ContainerStarted","Data":"e07ce1328e93a7aaf324f7ec49fc13b10732c980b355e901bff56ff144383dd7"} Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.210012 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3873abc6-2d46-4624-84d9-b53559d1d83f" (UID: "3873abc6-2d46-4624-84d9-b53559d1d83f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.214883 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.215520 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3873abc6-2d46-4624-84d9-b53559d1d83f","Type":"ContainerDied","Data":"0031a408206e05eafc57201df02ab1b45792d4d9c15ae470431e885255de81aa"} Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.221845 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") pod \"a350ef7d-4057-40fd-807d-5b29d2b3b465\" (UID: \"a350ef7d-4057-40fd-807d-5b29d2b3b465\") " Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222422 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222500 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222551 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqk6d\" (UniqueName: \"kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222623 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222732 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222768 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0" 
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222817 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.222894 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223008 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223030 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc9vj\" (UniqueName: \"kubernetes.io/projected/3873abc6-2d46-4624-84d9-b53559d1d83f-kube-api-access-mc9vj\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223045 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3873abc6-2d46-4624-84d9-b53559d1d83f-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223057 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a3e754-132c-4c4e-9593-91ca3f391363-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223072 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffzz8\" (UniqueName: \"kubernetes.io/projected/e5a3e754-132c-4c4e-9593-91ca3f391363-kube-api-access-ffzz8\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223090 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.223103 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.227362 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.231310 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.231775 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.233375 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.237600 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a350ef7d-4057-40fd-807d-5b29d2b3b465" (UID: "a350ef7d-4057-40fd-807d-5b29d2b3b465"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.238723 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3873abc6-2d46-4624-84d9-b53559d1d83f" (UID: "3873abc6-2d46-4624-84d9-b53559d1d83f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.240216 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.242819 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "278f75a7-f7ec-4e83-9c09-83ceb414b5a0" (UID: "278f75a7-f7ec-4e83-9c09-83ceb414b5a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.243612 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-p4vk6" podStartSLOduration=5.739140508 podStartE2EDuration="57.243585603s" podCreationTimestamp="2026-02-16 10:06:47 +0000 UTC" firstStartedPulling="2026-02-16 10:06:50.408805784 +0000 UTC m=+1268.101961954" lastFinishedPulling="2026-02-16 10:07:41.913250869 +0000 UTC m=+1319.606407049" observedRunningTime="2026-02-16 10:07:44.238168904 +0000 UTC m=+1321.931325104" watchObservedRunningTime="2026-02-16 10:07:44.243585603 +0000 UTC m=+1321.936741793"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.244868 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data" (OuterVolumeSpecName: "config-data") pod "e5a3e754-132c-4c4e-9593-91ca3f391363" (UID: "e5a3e754-132c-4c4e-9593-91ca3f391363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.245920 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-j4vhw"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.247277 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.247524 4814 scope.go:117] "RemoveContainer" containerID="463ffe3a7b77654af3da62559b1b0e57d6037c101701a089cc6669b152c63ff3"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.247922 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8lvv6"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.248554 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8lvv6" event={"ID":"e5a3e754-132c-4c4e-9593-91ca3f391363","Type":"ContainerDied","Data":"db0959f756a3be14d65b73358850081c17f2470f207760f2b0bf1700427de0de"}
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.248592 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0959f756a3be14d65b73358850081c17f2470f207760f2b0bf1700427de0de"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.248654 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.251113 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqk6d\" (UniqueName: \"kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.261316 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.297151 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5a3e754-132c-4c4e-9593-91ca3f391363" (UID: "e5a3e754-132c-4c4e-9593-91ca3f391363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.301784 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data" (OuterVolumeSpecName: "config-data") pod "3873abc6-2d46-4624-84d9-b53559d1d83f" (UID: "3873abc6-2d46-4624-84d9-b53559d1d83f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.312328 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.313972 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.328110 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.328895 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/278f75a7-f7ec-4e83-9c09-83ceb414b5a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.329015 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.329093 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3873abc6-2d46-4624-84d9-b53559d1d83f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.329162 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.329251 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a3e754-132c-4c4e-9593-91ca3f391363-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.329322 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a350ef7d-4057-40fd-807d-5b29d2b3b465-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.341026 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.345941 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.346187 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.349479 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.350173 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.390801 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.405237 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-846869f756-srzgg"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.411682 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.423711 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.423947 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.427144 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846869f756-srzgg"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431369 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcqn\" (UniqueName: \"kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431525 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431604 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431684 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431882 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.431981 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.432061 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535269 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535378 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535407 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535450 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535491 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535557 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535644 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535681 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535708 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535744 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535775 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbvl\" (UniqueName: \"kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535808 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535858 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.535958 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.536005 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcqn\" (UniqueName: \"kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.536583 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.544739 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.546174 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.549048 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.549656 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.550309 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.552284 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.571665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.638452 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639000 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639071 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639139 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639168 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639215 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.639236 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbvl\" (UniqueName: \"kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.664777 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcqn\" (UniqueName: \"kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.665482 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.672158 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.673343 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbvl\" (UniqueName: \"kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.686316 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.701972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.702255 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.702758 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.707645 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts\") pod \"placement-846869f756-srzgg\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.726450 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.746363 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.749764 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.759985 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.760342 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-846869f756-srzgg"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.772332 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.795247 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.816684 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.845732 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.845831 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-logs\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.845875 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qqv\" (UniqueName: \"kubernetes.io/projected/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-kube-api-access-p9qqv\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.846126 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-config-data\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.910277 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.913704 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.916246 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.954210 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.963961 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.964072 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-logs\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.967084 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-logs\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.972668 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.975068 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qqv\" (UniqueName: \"kubernetes.io/projected/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-kube-api-access-p9qqv\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.975493 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-config-data\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.977040 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8669966799-gwc6g"]
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.979123 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8669966799-gwc6g"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.981270 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.989097 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.989376 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.989587 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7vv8q"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.989951 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.990083 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 10:07:44 crc kubenswrapper[4814]: I0216 10:07:44.990302 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.007722 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-config-data\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.077301 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qqv\" (UniqueName: \"kubernetes.io/projected/f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2-kube-api-access-p9qqv\") pod \"watcher-applier-0\" (UID: \"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2\") " pod="openstack/watcher-applier-0"
Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.227750 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-fernet-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228322 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228473 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-combined-ca-bundle\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228548 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6lr4\" (UniqueName: \"kubernetes.io/projected/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-kube-api-access-x6lr4\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228584 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-config-data\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228638 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-credential-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228687 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfvc\" (UniqueName: \"kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228734 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-public-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228781 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228817 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-internal-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-scripts\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228936 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.228986 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.261341 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278f75a7-f7ec-4e83-9c09-83ceb414b5a0" path="/var/lib/kubelet/pods/278f75a7-f7ec-4e83-9c09-83ceb414b5a0/volumes" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.262352 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3873abc6-2d46-4624-84d9-b53559d1d83f" path="/var/lib/kubelet/pods/3873abc6-2d46-4624-84d9-b53559d1d83f/volumes" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.263038 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fc4b96-e2a4-4505-8500-7f476f36f799" path="/var/lib/kubelet/pods/57fc4b96-e2a4-4505-8500-7f476f36f799/volumes" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.270955 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83886ba5-6048-49e2-9750-add1874f5929" 
path="/var/lib/kubelet/pods/83886ba5-6048-49e2-9750-add1874f5929/volumes" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.280248 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8669966799-gwc6g"] Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334326 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334396 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334463 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-fernet-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334494 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334558 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-combined-ca-bundle\") 
pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334593 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6lr4\" (UniqueName: \"kubernetes.io/projected/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-kube-api-access-x6lr4\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-config-data\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334658 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-credential-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334688 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfvc\" (UniqueName: \"kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334746 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-public-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: 
\"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334776 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334798 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-internal-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.334868 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-scripts\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.335195 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.357942 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-credential-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: 
I0216 10:07:45.359997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-scripts\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.361449 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-config-data\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.361976 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-public-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.362257 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.362515 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.365853 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-internal-tls-certs\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.367645 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-combined-ca-bundle\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.368159 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-fernet-keys\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.371341 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.384736 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.386403 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6lr4\" (UniqueName: \"kubernetes.io/projected/d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6-kube-api-access-x6lr4\") pod \"keystone-8669966799-gwc6g\" (UID: \"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6\") " pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.402205 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfvc\" (UniqueName: \"kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc\") pod \"watcher-decision-engine-0\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") " pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.502555 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.536252 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.620055 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.655494 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-846869f756-srzgg"] Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.779240 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 16 10:07:45 crc kubenswrapper[4814]: I0216 10:07:45.848820 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.204944 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.340106 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerStarted","Data":"5799022d0c8ccf79e97649148c97ab28d222b778afddac02877f6a5d0955d80b"} Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.355658 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerStarted","Data":"c7ef1637cbc168dfe8eadfe06c3237ca297bfecf1b2b6de56fa4183d63209b96"} Feb 16 10:07:46 crc kubenswrapper[4814]: W0216 10:07:46.368456 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6af939f_d1dd_44b1_b0c0_a52f27bf6f23.slice/crio-9812a6c36951790f04b24ce685fb51f0ff264aca3b233daffaa51329ac5a37e7 WatchSource:0}: Error finding container 9812a6c36951790f04b24ce685fb51f0ff264aca3b233daffaa51329ac5a37e7: Status 404 returned error can't find the container with id 
9812a6c36951790f04b24ce685fb51f0ff264aca3b233daffaa51329ac5a37e7 Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.624728 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.676613 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 16 10:07:46 crc kubenswrapper[4814]: I0216 10:07:46.847151 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8669966799-gwc6g"] Feb 16 10:07:46 crc kubenswrapper[4814]: W0216 10:07:46.937458 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79bc6bd_dfc5_4058_a72c_f3d0bf05b8f6.slice/crio-5f0502036ceafa8d3fc60d7822bc44de33a891bb16834defe91008ef7690c148 WatchSource:0}: Error finding container 5f0502036ceafa8d3fc60d7822bc44de33a891bb16834defe91008ef7690c148: Status 404 returned error can't find the container with id 5f0502036ceafa8d3fc60d7822bc44de33a891bb16834defe91008ef7690c148 Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.363580 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.386846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8669966799-gwc6g" event={"ID":"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6","Type":"ContainerStarted","Data":"5f0502036ceafa8d3fc60d7822bc44de33a891bb16834defe91008ef7690c148"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.388169 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerStarted","Data":"9812a6c36951790f04b24ce685fb51f0ff264aca3b233daffaa51329ac5a37e7"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.390151 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerStarted","Data":"fcc2b19f83f34873671e1de09f7da68afd069c43d6617b3b09790de797982cc1"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.390185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerStarted","Data":"302182948ffc83c2dfcace7fa5feea470265cfe21c390d4ccc310632a6428e7e"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.391625 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846869f756-srzgg" Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.391649 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-846869f756-srzgg" Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.402333 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2","Type":"ContainerStarted","Data":"af1a67245001a461b714a3aa71c131630705cd4fbfa3c9f0072e04dd8ef7efb0"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.410556 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerStarted","Data":"57a8bafe3a054b766e387beb450d3e3e8020c8ead5debdb497164c6a64af0918"} Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.429047 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-846869f756-srzgg" podStartSLOduration=3.429020893 podStartE2EDuration="3.429020893s" 
podCreationTimestamp="2026-02-16 10:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:47.425737217 +0000 UTC m=+1325.118893407" watchObservedRunningTime="2026-02-16 10:07:47.429020893 +0000 UTC m=+1325.122177073" Feb 16 10:07:47 crc kubenswrapper[4814]: I0216 10:07:47.875055 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76696f58b-dfzph" podUID="d4064477-94ed-4129-819b-63df1d34d227" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.167:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.167:8443: connect: connection refused" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.539952 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerStarted","Data":"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08"} Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.544383 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerStarted","Data":"d54f086af5398528505ba9cdf4e062ebd7895f53fc59e3268c49fa6186db035f"} Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.570173 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2","Type":"ContainerStarted","Data":"21ae8c05d3f341b085b9c9b7e531e7b4c43dc0b56af7c090d222ee2e324c67d8"} Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.582150 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8669966799-gwc6g" event={"ID":"d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6","Type":"ContainerStarted","Data":"87ed7f4e0a7923132c51d735d571e424093ff093a7422d43c568ef31b8cac6cb"} Feb 16 10:07:48 crc kubenswrapper[4814]: 
I0216 10:07:48.583232 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.593715 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=4.593689179 podStartE2EDuration="4.593689179s" podCreationTimestamp="2026-02-16 10:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:48.573825461 +0000 UTC m=+1326.266981651" watchObservedRunningTime="2026-02-16 10:07:48.593689179 +0000 UTC m=+1326.286845359" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.603992 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerStarted","Data":"5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518"} Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.620305 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=4.620282163 podStartE2EDuration="4.620282163s" podCreationTimestamp="2026-02-16 10:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:48.613084888 +0000 UTC m=+1326.306241068" watchObservedRunningTime="2026-02-16 10:07:48.620282163 +0000 UTC m=+1326.313438343" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.666260 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8669966799-gwc6g" podStartSLOduration=4.666234335 podStartE2EDuration="4.666234335s" podCreationTimestamp="2026-02-16 10:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 10:07:48.639468838 +0000 UTC m=+1326.332625018" watchObservedRunningTime="2026-02-16 10:07:48.666234335 +0000 UTC m=+1326.359390505" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.873842 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b595588cb-jj9fp"] Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.876184 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:48 crc kubenswrapper[4814]: I0216 10:07:48.932661 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b595588cb-jj9fp"] Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032024 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-config-data\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032095 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-combined-ca-bundle\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032141 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpvx\" (UniqueName: \"kubernetes.io/projected/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-kube-api-access-4fpvx\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032294 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-scripts\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032337 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-internal-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032571 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-logs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.032671 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-public-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.138818 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-config-data\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.138959 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-combined-ca-bundle\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.139009 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fpvx\" (UniqueName: \"kubernetes.io/projected/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-kube-api-access-4fpvx\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.139115 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-scripts\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.139140 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-internal-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.139230 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-logs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.139298 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-public-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.141630 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-logs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.146466 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-scripts\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.149272 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-internal-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.151705 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-config-data\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.151972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-public-tls-certs\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " 
pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.168996 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-combined-ca-bundle\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.183415 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fpvx\" (UniqueName: \"kubernetes.io/projected/a1c41c68-9785-42e3-aba9-ad9b36fc72d8-kube-api-access-4fpvx\") pod \"placement-b595588cb-jj9fp\" (UID: \"a1c41c68-9785-42e3-aba9-ad9b36fc72d8\") " pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:49 crc kubenswrapper[4814]: I0216 10:07:49.267733 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.014411 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b595588cb-jj9fp"] Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.386190 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.660644 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b595588cb-jj9fp" event={"ID":"a1c41c68-9785-42e3-aba9-ad9b36fc72d8","Type":"ContainerStarted","Data":"2ce548d7aeb22788d8fa5e8eb0d27ce2b8dde5666d644af0f5501c9ffb36f14a"} Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.660717 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b595588cb-jj9fp" event={"ID":"a1c41c68-9785-42e3-aba9-ad9b36fc72d8","Type":"ContainerStarted","Data":"e055f890d92bb688a31c1ca322939cf7f8e63d082dd9c447ceee25fc2f6eae06"} Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 
10:07:50.690684 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.691041 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" containerID="cri-o://a1da293499103e696851c13f46701b761792f85dbe30cea54308a5edbc062afc" gracePeriod=30 Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.691247 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" containerID="cri-o://6a2a36eea931c3dfde895d8e48f7b2c745464991dcf338c3f79d2fbc6e89c232" gracePeriod=30 Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.695784 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerStarted","Data":"53dbded13800fb5aa93db6abfae70bc8e14a2f2f83ff1b4104bc25f7198d3a54"} Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.725452 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerStarted","Data":"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6"} Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.768477 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.768448762 podStartE2EDuration="7.768448762s" podCreationTimestamp="2026-02-16 10:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:50.747315728 +0000 UTC m=+1328.440471908" watchObservedRunningTime="2026-02-16 10:07:50.768448762 +0000 UTC m=+1328.461604942" 
Feb 16 10:07:50 crc kubenswrapper[4814]: I0216 10:07:50.825345 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.825318013 podStartE2EDuration="6.825318013s" podCreationTimestamp="2026-02-16 10:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:50.815569698 +0000 UTC m=+1328.508725878" watchObservedRunningTime="2026-02-16 10:07:50.825318013 +0000 UTC m=+1328.518474193" Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.744296 4814 generic.go:334] "Generic (PLEG): container finished" podID="5911fde8-d13a-4c6a-941e-e25515983484" containerID="a1da293499103e696851c13f46701b761792f85dbe30cea54308a5edbc062afc" exitCode=143 Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.744379 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerDied","Data":"a1da293499103e696851c13f46701b761792f85dbe30cea54308a5edbc062afc"} Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.746822 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b595588cb-jj9fp" event={"ID":"a1c41c68-9785-42e3-aba9-ad9b36fc72d8","Type":"ContainerStarted","Data":"689b8779459a3c73b16a4e0e4666d10788e5381fcabee604f8001e80a492b2b4"} Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.747171 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.747546 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:07:51 crc kubenswrapper[4814]: I0216 10:07:51.775129 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b595588cb-jj9fp" podStartSLOduration=3.775100979 
podStartE2EDuration="3.775100979s" podCreationTimestamp="2026-02-16 10:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:07:51.768302833 +0000 UTC m=+1329.461459013" watchObservedRunningTime="2026-02-16 10:07:51.775100979 +0000 UTC m=+1329.468257159" Feb 16 10:07:52 crc kubenswrapper[4814]: I0216 10:07:52.595379 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:36142->10.217.0.168:9322: read: connection reset by peer" Feb 16 10:07:52 crc kubenswrapper[4814]: I0216 10:07:52.595379 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": read tcp 10.217.0.2:36134->10.217.0.168:9322: read: connection reset by peer" Feb 16 10:07:52 crc kubenswrapper[4814]: I0216 10:07:52.763007 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerDied","Data":"6a2a36eea931c3dfde895d8e48f7b2c745464991dcf338c3f79d2fbc6e89c232"} Feb 16 10:07:52 crc kubenswrapper[4814]: I0216 10:07:52.762955 4814 generic.go:334] "Generic (PLEG): container finished" podID="5911fde8-d13a-4c6a-941e-e25515983484" containerID="6a2a36eea931c3dfde895d8e48f7b2c745464991dcf338c3f79d2fbc6e89c232" exitCode=0 Feb 16 10:07:53 crc kubenswrapper[4814]: I0216 10:07:53.796936 4814 generic.go:334] "Generic (PLEG): container finished" podID="e57a813a-2457-4800-8eef-a91c409659f3" containerID="30319370de6609a922739d32ec09f9f87f94658ae16fb92274c415dc0a46e20f" exitCode=0 Feb 16 10:07:53 crc kubenswrapper[4814]: I0216 10:07:53.797051 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-sync-47nvw" event={"ID":"e57a813a-2457-4800-8eef-a91c409659f3","Type":"ContainerDied","Data":"30319370de6609a922739d32ec09f9f87f94658ae16fb92274c415dc0a46e20f"} Feb 16 10:07:53 crc kubenswrapper[4814]: I0216 10:07:53.800471 4814 generic.go:334] "Generic (PLEG): container finished" podID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerID="d54f086af5398528505ba9cdf4e062ebd7895f53fc59e3268c49fa6186db035f" exitCode=1 Feb 16 10:07:53 crc kubenswrapper[4814]: I0216 10:07:53.800575 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerDied","Data":"d54f086af5398528505ba9cdf4e062ebd7895f53fc59e3268c49fa6186db035f"} Feb 16 10:07:53 crc kubenswrapper[4814]: I0216 10:07:53.803010 4814 scope.go:117] "RemoveContainer" containerID="d54f086af5398528505ba9cdf4e062ebd7895f53fc59e3268c49fa6186db035f" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.347436 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.348726 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.395058 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.397100 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.826777 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.826820 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.986799 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 10:07:54 crc kubenswrapper[4814]: I0216 10:07:54.986884 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.167627 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.220230 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.385603 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.429531 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.503246 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.503313 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.756152 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": dial tcp 10.217.0.168:9322: connect: connection refused" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.756153 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" 
podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.168:9322/\": dial tcp 10.217.0.168:9322: connect: connection refused" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.837123 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.837171 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 10:07:55 crc kubenswrapper[4814]: I0216 10:07:55.869685 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 16 10:07:56 crc kubenswrapper[4814]: I0216 10:07:56.849893 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 10:07:56 crc kubenswrapper[4814]: I0216 10:07:56.849927 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.452851 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-47nvw" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.467636 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.537911 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs\") pod \"5911fde8-d13a-4c6a-941e-e25515983484\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538126 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca\") pod \"5911fde8-d13a-4c6a-941e-e25515983484\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538174 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle\") pod \"e57a813a-2457-4800-8eef-a91c409659f3\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538258 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data\") pod \"5911fde8-d13a-4c6a-941e-e25515983484\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538337 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxrx\" (UniqueName: \"kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx\") pod \"5911fde8-d13a-4c6a-941e-e25515983484\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538370 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle\") pod \"5911fde8-d13a-4c6a-941e-e25515983484\" (UID: \"5911fde8-d13a-4c6a-941e-e25515983484\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538463 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data\") pod \"e57a813a-2457-4800-8eef-a91c409659f3\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538599 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsm47\" (UniqueName: \"kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47\") pod \"e57a813a-2457-4800-8eef-a91c409659f3\" (UID: \"e57a813a-2457-4800-8eef-a91c409659f3\") " Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.538760 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs" (OuterVolumeSpecName: "logs") pod "5911fde8-d13a-4c6a-941e-e25515983484" (UID: "5911fde8-d13a-4c6a-941e-e25515983484"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.539265 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5911fde8-d13a-4c6a-941e-e25515983484-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.560861 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx" (OuterVolumeSpecName: "kube-api-access-9sxrx") pod "5911fde8-d13a-4c6a-941e-e25515983484" (UID: "5911fde8-d13a-4c6a-941e-e25515983484"). InnerVolumeSpecName "kube-api-access-9sxrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.571817 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e57a813a-2457-4800-8eef-a91c409659f3" (UID: "e57a813a-2457-4800-8eef-a91c409659f3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.584407 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47" (OuterVolumeSpecName: "kube-api-access-xsm47") pod "e57a813a-2457-4800-8eef-a91c409659f3" (UID: "e57a813a-2457-4800-8eef-a91c409659f3"). InnerVolumeSpecName "kube-api-access-xsm47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.602794 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e57a813a-2457-4800-8eef-a91c409659f3" (UID: "e57a813a-2457-4800-8eef-a91c409659f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.622653 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5911fde8-d13a-4c6a-941e-e25515983484" (UID: "5911fde8-d13a-4c6a-941e-e25515983484"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.643115 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.645841 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.645885 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.645898 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sxrx\" (UniqueName: \"kubernetes.io/projected/5911fde8-d13a-4c6a-941e-e25515983484-kube-api-access-9sxrx\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.645914 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e57a813a-2457-4800-8eef-a91c409659f3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.646896 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsm47\" (UniqueName: \"kubernetes.io/projected/e57a813a-2457-4800-8eef-a91c409659f3-kube-api-access-xsm47\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.651029 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.681737 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5911fde8-d13a-4c6a-941e-e25515983484" (UID: "5911fde8-d13a-4c6a-941e-e25515983484"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.701838 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data" (OuterVolumeSpecName: "config-data") pod "5911fde8-d13a-4c6a-941e-e25515983484" (UID: "5911fde8-d13a-4c6a-941e-e25515983484"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.750578 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.750637 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5911fde8-d13a-4c6a-941e-e25515983484-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.871728 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-47nvw" event={"ID":"e57a813a-2457-4800-8eef-a91c409659f3","Type":"ContainerDied","Data":"95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403"} Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.871783 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95a21e7d95c2fc800660ac153f74fa6b9fb973284d9edea9714ecf68dbf89403" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.871862 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-47nvw" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.891486 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.891641 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"5911fde8-d13a-4c6a-941e-e25515983484","Type":"ContainerDied","Data":"c4773f8ad50cd990499809229249dabc0148a335030457a2c82413f5c670ca47"} Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.891721 4814 scope.go:117] "RemoveContainer" containerID="6a2a36eea931c3dfde895d8e48f7b2c745464991dcf338c3f79d2fbc6e89c232" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.959426 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.970993 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.978568 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:57 crc kubenswrapper[4814]: E0216 10:07:57.979367 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979392 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" Feb 16 10:07:57 crc kubenswrapper[4814]: E0216 10:07:57.979413 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979420 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" Feb 16 10:07:57 crc kubenswrapper[4814]: E0216 
10:07:57.979444 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57a813a-2457-4800-8eef-a91c409659f3" containerName="barbican-db-sync" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979450 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57a813a-2457-4800-8eef-a91c409659f3" containerName="barbican-db-sync" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979660 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api-log" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979676 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5911fde8-d13a-4c6a-941e-e25515983484" containerName="watcher-api" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.979701 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57a813a-2457-4800-8eef-a91c409659f3" containerName="barbican-db-sync" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.980924 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.984148 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.985889 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 16 10:07:57 crc kubenswrapper[4814]: I0216 10:07:57.988785 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.023929 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.167974 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168181 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20340e82-7a4f-4828-affb-85843eca8f6c-logs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168225 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctkcm\" (UniqueName: \"kubernetes.io/projected/20340e82-7a4f-4828-affb-85843eca8f6c-kube-api-access-ctkcm\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168291 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168348 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168376 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.168442 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-config-data\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.271481 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.271583 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-combined-ca-bundle\") pod 
\"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.271630 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-config-data\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.271830 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.271945 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20340e82-7a4f-4828-affb-85843eca8f6c-logs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.272011 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctkcm\" (UniqueName: \"kubernetes.io/projected/20340e82-7a4f-4828-affb-85843eca8f6c-kube-api-access-ctkcm\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.272049 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.272787 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20340e82-7a4f-4828-affb-85843eca8f6c-logs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.276465 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.282233 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-public-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.298395 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.302491 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.302593 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20340e82-7a4f-4828-affb-85843eca8f6c-config-data\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " 
pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.307870 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctkcm\" (UniqueName: \"kubernetes.io/projected/20340e82-7a4f-4828-affb-85843eca8f6c-kube-api-access-ctkcm\") pod \"watcher-api-0\" (UID: \"20340e82-7a4f-4828-affb-85843eca8f6c\") " pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.608638 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.619472 4814 scope.go:117] "RemoveContainer" containerID="a1da293499103e696851c13f46701b761792f85dbe30cea54308a5edbc062afc" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.897108 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5db6f4b556-hsqhl"] Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.914843 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.927152 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-vsf7q" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.932291 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.932621 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.936778 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5db6f4b556-hsqhl"] Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.975966 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cb9bb875f-gkglk"] Feb 16 10:07:58 crc kubenswrapper[4814]: I0216 10:07:58.995038 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.001833 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.104261 4814 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2f2ed9b7-2884-4466-a5b3-f09640444423"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2f2ed9b7-2884-4466-a5b3-f09640444423] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2f2ed9b7_2884_4466_a5b3_f09640444423.slice" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.107765 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data-custom\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.107916 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-logs\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108056 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-combined-ca-bundle\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc 
kubenswrapper[4814]: I0216 10:07:59.108135 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqg8\" (UniqueName: \"kubernetes.io/projected/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-kube-api-access-4lqg8\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108212 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data-custom\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108338 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108391 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vn87\" (UniqueName: \"kubernetes.io/projected/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-kube-api-access-8vn87\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108637 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-combined-ca-bundle\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108798 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-logs\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.108869 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.171524 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5911fde8-d13a-4c6a-941e-e25515983484" path="/var/lib/kubelet/pods/5911fde8-d13a-4c6a-941e-e25515983484/volumes" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.172928 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cb9bb875f-gkglk"] Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.172965 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"] Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.183267 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216404 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-combined-ca-bundle\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216600 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-logs\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216657 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216779 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data-custom\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216844 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-logs\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " 
pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216919 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-combined-ca-bundle\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.216973 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqg8\" (UniqueName: \"kubernetes.io/projected/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-kube-api-access-4lqg8\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.217111 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data-custom\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.217223 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.217271 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vn87\" (UniqueName: 
\"kubernetes.io/projected/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-kube-api-access-8vn87\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.218523 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-logs\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.228226 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-logs\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.248773 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"] Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.251513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-combined-ca-bundle\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.266933 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data-custom\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc 
kubenswrapper[4814]: I0216 10:07:59.268799 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data-custom\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.269368 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-config-data\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.270570 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-config-data\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.272621 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-combined-ca-bundle\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.293960 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lqg8\" (UniqueName: \"kubernetes.io/projected/f1ffe164-e3ac-43be-bd5a-c3c0aa75930a-kube-api-access-4lqg8\") pod \"barbican-keystone-listener-5db6f4b556-hsqhl\" (UID: \"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a\") " pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" 
Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.315370 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.319213 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.319269 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.319292 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2vw\" (UniqueName: \"kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.320354 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.320455 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.320734 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.321412 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.328022 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.339551 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vn87\" (UniqueName: \"kubernetes.io/projected/7ee17c93-aa03-460b-a8ca-9fbc19b6a23f-kube-api-access-8vn87\") pod \"barbican-worker-5cb9bb875f-gkglk\" (UID: \"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f\") " pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.399079 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.420607 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423340 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423385 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423495 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423680 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423740 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: 
\"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423856 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423965 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.423994 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk2vw\" (UniqueName: \"kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.424024 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snm7q\" (UniqueName: \"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: 
\"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.424162 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.425526 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.426670 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: E0216 10:07:59.426703 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.427443 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cb9bb875f-gkglk" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.427658 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.428039 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.428278 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.456524 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.456701 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.456907 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk2vw\" (UniqueName: \"kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw\") pod \"dnsmasq-dns-664c9d964f-bhzdp\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.527179 
4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.527616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.527660 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.527709 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.527841 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snm7q\" (UniqueName: \"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.544919 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.546483 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.563378 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.565330 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.612131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snm7q\" (UniqueName: \"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q\") pod \"barbican-api-865d97dbf4-rmb8f\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.674142 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.711717 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:07:59 crc kubenswrapper[4814]: I0216 10:07:59.868575 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.095437 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"20340e82-7a4f-4828-affb-85843eca8f6c","Type":"ContainerStarted","Data":"a33469eb7ddbdccb69b5f077b73c22b54840fe1bf7c26d8109b6505d722962f8"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.102499 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerStarted","Data":"d82476c9f83b96957c0975e4b813ef4bbdb8c27466e34070e2e229705ca3c13f"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.102729 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="ceilometer-notification-agent" containerID="cri-o://4751518f5963e87649e42f66d9c69d624f098c833c7af42d58ae4abb09a66803" gracePeriod=30 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.102841 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.102991 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="sg-core" containerID="cri-o://00e8670591c5639d8864385e69b9b98adfd6563750769cb0d2dc36b57a4eda07" gracePeriod=30 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.103194 4814 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="proxy-httpd" containerID="cri-o://d82476c9f83b96957c0975e4b813ef4bbdb8c27466e34070e2e229705ca3c13f" gracePeriod=30 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.139805 4814 generic.go:334] "Generic (PLEG): container finished" podID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" containerID="e07ce1328e93a7aaf324f7ec49fc13b10732c980b355e901bff56ff144383dd7" exitCode=0 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.140207 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p4vk6" event={"ID":"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4","Type":"ContainerDied","Data":"e07ce1328e93a7aaf324f7ec49fc13b10732c980b355e901bff56ff144383dd7"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.150688 4814 generic.go:334] "Generic (PLEG): container finished" podID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerID="2bcd2eef0544b96319d58c16f6d66d501f48eb0514d788fedc88e1d79bf58d11" exitCode=137 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.150731 4814 generic.go:334] "Generic (PLEG): container finished" podID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerID="9193eee49c8d438afe4072cfc74fbac4288ac5a657c91a0b27084b4ead631a6c" exitCode=137 Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.150790 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerDied","Data":"2bcd2eef0544b96319d58c16f6d66d501f48eb0514d788fedc88e1d79bf58d11"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.150821 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerDied","Data":"9193eee49c8d438afe4072cfc74fbac4288ac5a657c91a0b27084b4ead631a6c"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.152654 4814 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5db6f4b556-hsqhl"] Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.162487 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerStarted","Data":"3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9"} Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.202055 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.306421 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cb9bb875f-gkglk"] Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.360501 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.880557 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"] Feb 16 10:08:00 crc kubenswrapper[4814]: I0216 10:08:00.929775 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.017642 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs\") pod \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.017750 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key\") pod \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.017793 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data\") pod \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.017885 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts\") pod \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.018003 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsr6\" (UniqueName: \"kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6\") pod \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\" (UID: \"a7a61dcc-b9cc-4c92-b242-a4af907a0137\") " Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.020322 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs" (OuterVolumeSpecName: "logs") pod "a7a61dcc-b9cc-4c92-b242-a4af907a0137" (UID: "a7a61dcc-b9cc-4c92-b242-a4af907a0137"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.033398 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a7a61dcc-b9cc-4c92-b242-a4af907a0137" (UID: "a7a61dcc-b9cc-4c92-b242-a4af907a0137"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.035785 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6" (OuterVolumeSpecName: "kube-api-access-9wsr6") pod "a7a61dcc-b9cc-4c92-b242-a4af907a0137" (UID: "a7a61dcc-b9cc-4c92-b242-a4af907a0137"). InnerVolumeSpecName "kube-api-access-9wsr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.057694 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data" (OuterVolumeSpecName: "config-data") pod "a7a61dcc-b9cc-4c92-b242-a4af907a0137" (UID: "a7a61dcc-b9cc-4c92-b242-a4af907a0137"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.118482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.121101 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7a61dcc-b9cc-4c92-b242-a4af907a0137-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.121137 4814 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a7a61dcc-b9cc-4c92-b242-a4af907a0137-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.121150 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.121162 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsr6\" (UniqueName: \"kubernetes.io/projected/a7a61dcc-b9cc-4c92-b242-a4af907a0137-kube-api-access-9wsr6\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.122738 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts" (OuterVolumeSpecName: "scripts") pod "a7a61dcc-b9cc-4c92-b242-a4af907a0137" (UID: "a7a61dcc-b9cc-4c92-b242-a4af907a0137"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.225153 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7a61dcc-b9cc-4c92-b242-a4af907a0137-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.225292 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerStarted","Data":"f3f5d8c9fbedb172f27208aeb62da0921f785fc4b1d6ad99a8f78ab630863b51"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.242345 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" event={"ID":"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a","Type":"ContainerStarted","Data":"37edc2752bd76dc333ba56d2361409f3f970b51cec04b295d650eca540b4abfc"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.260740 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"20340e82-7a4f-4828-affb-85843eca8f6c","Type":"ContainerStarted","Data":"5f2aa6efb4e62224feb536fd629c8c6d4f5d99f857cff2b9a6ac7e6802b9159a"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.310032 4814 generic.go:334] "Generic (PLEG): container finished" podID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerID="d82476c9f83b96957c0975e4b813ef4bbdb8c27466e34070e2e229705ca3c13f" exitCode=0 Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.310091 4814 generic.go:334] "Generic (PLEG): container finished" podID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerID="00e8670591c5639d8864385e69b9b98adfd6563750769cb0d2dc36b57a4eda07" exitCode=2 Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.310233 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerDied","Data":"d82476c9f83b96957c0975e4b813ef4bbdb8c27466e34070e2e229705ca3c13f"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.310271 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerDied","Data":"00e8670591c5639d8864385e69b9b98adfd6563750769cb0d2dc36b57a4eda07"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.324444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f55894665-vd6fz" event={"ID":"a7a61dcc-b9cc-4c92-b242-a4af907a0137","Type":"ContainerDied","Data":"3ec2e155fa881adf3c3e1ed9fd66e3af4ad7443ee6683d4db49f6a2c0966c260"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.324512 4814 scope.go:117] "RemoveContainer" containerID="2bcd2eef0544b96319d58c16f6d66d501f48eb0514d788fedc88e1d79bf58d11" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.324741 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f55894665-vd6fz" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.338105 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9bb875f-gkglk" event={"ID":"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f","Type":"ContainerStarted","Data":"c61e59456f0f8befe479e11a1c225623709559cf3b2e465fb1ec668b541a1390"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.351869 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" event={"ID":"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57","Type":"ContainerStarted","Data":"bf11059cb9b06c1eb591faaac1d1269f2ddba44d9665d8a8ff9c88854a5fd4ef"} Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.368363 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.538721 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:08:01 crc kubenswrapper[4814]: I0216 10:08:01.655963 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f55894665-vd6fz"] Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.069992 4814 scope.go:117] "RemoveContainer" containerID="9193eee49c8d438afe4072cfc74fbac4288ac5a657c91a0b27084b4ead631a6c" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.120019 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-76696f58b-dfzph" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.392115 4814 generic.go:334] "Generic (PLEG): container finished" podID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerID="57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81" exitCode=0 Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.393845 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" 
event={"ID":"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57","Type":"ContainerDied","Data":"57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81"} Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.399747 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerStarted","Data":"0dba60d68feeea67b94eaef713ea6b7cd1898b94fcf8eec3d78c3a361bf23851"} Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.406660 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"20340e82-7a4f-4828-affb-85843eca8f6c","Type":"ContainerStarted","Data":"6926d903d893fda68781f278ec56330df61acd2ee7180c5f113d16b2342c66f3"} Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.407786 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.412041 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-p4vk6" event={"ID":"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4","Type":"ContainerDied","Data":"2b98731628f3e95a80a45f7af0aebd075860209975016487cd62ab41fdafd8c9"} Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.412086 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b98731628f3e95a80a45f7af0aebd075860209975016487cd62ab41fdafd8c9" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.412237 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-p4vk6" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.472324 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.472286979 podStartE2EDuration="5.472286979s" podCreationTimestamp="2026-02-16 10:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:02.455009683 +0000 UTC m=+1340.148165863" watchObservedRunningTime="2026-02-16 10:08:02.472286979 +0000 UTC m=+1340.165443189" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492365 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492457 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492746 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492783 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: 
\"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492861 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.492940 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75r9b\" (UniqueName: \"kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b\") pod \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\" (UID: \"c89e3cee-9acb-4b29-ab9a-ad50616aa9d4\") " Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.494852 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.514941 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b" (OuterVolumeSpecName: "kube-api-access-75r9b") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "kube-api-access-75r9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.515043 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.519788 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts" (OuterVolumeSpecName: "scripts") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.596247 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.596381 4814 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.596393 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75r9b\" (UniqueName: \"kubernetes.io/projected/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-kube-api-access-75r9b\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.596402 4814 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.610795 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.672695 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data" (OuterVolumeSpecName: "config-data") pod "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" (UID: "c89e3cee-9acb-4b29-ab9a-ad50616aa9d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.698350 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:02 crc kubenswrapper[4814]: I0216 10:08:02.698394 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.029190 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" path="/var/lib/kubelet/pods/a7a61dcc-b9cc-4c92-b242-a4af907a0137/volumes" Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.113926 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fc9647d64-z5jk2"] Feb 16 10:08:03 crc kubenswrapper[4814]: E0216 10:08:03.114783 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" 
containerName="horizon-log"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.114815 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon-log"
Feb 16 10:08:03 crc kubenswrapper[4814]: E0216 10:08:03.114855 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.114868 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon"
Feb 16 10:08:03 crc kubenswrapper[4814]: E0216 10:08:03.114882 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" containerName="cinder-db-sync"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.114892 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" containerName="cinder-db-sync"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.115336 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.115413 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a61dcc-b9cc-4c92-b242-a4af907a0137" containerName="horizon-log"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.115434 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" containerName="cinder-db-sync"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.117438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.128488 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.134199 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.151504 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fc9647d64-z5jk2"]
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238203 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238298 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data-custom\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238323 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-public-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238489 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dfade-0392-451d-85d6-cf886a408bb4-logs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238530 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-internal-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238602 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-combined-ca-bundle\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.238633 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb6v\" (UniqueName: \"kubernetes.io/projected/3f3dfade-0392-451d-85d6-cf886a408bb4-kube-api-access-scb6v\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.339846 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dfade-0392-451d-85d6-cf886a408bb4-logs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.341488 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-internal-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.341674 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-combined-ca-bundle\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.341710 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb6v\" (UniqueName: \"kubernetes.io/projected/3f3dfade-0392-451d-85d6-cf886a408bb4-kube-api-access-scb6v\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.341405 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dfade-0392-451d-85d6-cf886a408bb4-logs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.342862 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.342917 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data-custom\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.342946 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-public-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.440980 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-combined-ca-bundle\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.441128 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-internal-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.442545 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerStarted","Data":"6a02836f22987781418bb24f46eb67e0cbe4a52bf4842cb5d70ddaa7b7e84213"}
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.442630 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-p4vk6"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.442707 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-865d97dbf4-rmb8f"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.442724 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-865d97dbf4-rmb8f"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.445509 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-public-tls-certs\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.446512 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.448594 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f3dfade-0392-451d-85d6-cf886a408bb4-config-data-custom\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.451632 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb6v\" (UniqueName: \"kubernetes.io/projected/3f3dfade-0392-451d-85d6-cf886a408bb4-kube-api-access-scb6v\") pod \"barbican-api-fc9647d64-z5jk2\" (UID: \"3f3dfade-0392-451d-85d6-cf886a408bb4\") " pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.478131 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fc9647d64-z5jk2"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.511123 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-865d97dbf4-rmb8f" podStartSLOduration=4.511098231 podStartE2EDuration="4.511098231s" podCreationTimestamp="2026-02-16 10:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:03.479870614 +0000 UTC m=+1341.173026794" watchObservedRunningTime="2026-02-16 10:08:03.511098231 +0000 UTC m=+1341.204254411"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.609821 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.813380 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.822607 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.829228 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.829634 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.829800 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.830077 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jj8w9"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.857619 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.975842 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhxg\" (UniqueName: \"kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.975964 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.976208 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.976280 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.986464 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:03 crc kubenswrapper[4814]: I0216 10:08:03.986574 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.058155 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"]
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.078104 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"]
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.080238 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088478 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088592 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088616 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088672 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.088762 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lhxg\" (UniqueName: \"kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.089094 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.089677 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"]
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.101795 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.102886 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.109415 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.118029 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.137915 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lhxg\" (UniqueName: \"kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg\") pod \"cinder-scheduler-0\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192083 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvkn4\" (UniqueName: \"kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192210 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192309 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192377 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192522 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.192820 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.218293 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295577 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295740 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvkn4\" (UniqueName: \"kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295779 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295815 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295857 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.295925 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.297141 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.297927 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.300337 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.300380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.300747 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.304301 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.316729 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.320896 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.330409 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.334963 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvkn4\" (UniqueName: \"kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4\") pod \"dnsmasq-dns-7586bc8799-4lnds\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.472731 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.474099 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f95b74b5b-mpwlg"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509042 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509258 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x62sg\" (UniqueName: \"kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509408 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509513 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509571 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509687 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.509894 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.538307 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7586bc8799-4lnds"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612370 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612518 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612592 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612673 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612814 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.612925 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.613049 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x62sg\" (UniqueName: \"kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.614552 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.615842 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.622127 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.623786 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.628290 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.636791 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.638122 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x62sg\" (UniqueName: \"kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg\") pod \"cinder-api-0\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " pod="openstack/cinder-api-0"
Feb 16 10:08:04 crc kubenswrapper[4814]: I0216 10:08:04.721004 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.503423 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 16 10:08:05 crc kubenswrapper[4814]: E0216 10:08:05.507831 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9 is running failed: container process not found" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Feb 16 10:08:05 crc kubenswrapper[4814]: E0216 10:08:05.508732 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9 is running failed: container process not found" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Feb 16 10:08:05 crc kubenswrapper[4814]: E0216 10:08:05.509180 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9 is running failed: container process not found" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"]
Feb 16 10:08:05 crc kubenswrapper[4814]: E0216 10:08:05.509226 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9 is running failed: container process not found" probeType="Startup" pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.517742 4814 generic.go:334] "Generic (PLEG): container finished" podID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerID="4751518f5963e87649e42f66d9c69d624f098c833c7af42d58ae4abb09a66803" exitCode=0
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.517829 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerDied","Data":"4751518f5963e87649e42f66d9c69d624f098c833c7af42d58ae4abb09a66803"}
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.520760 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-76696f58b-dfzph"
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.527450 4814 generic.go:334] "Generic (PLEG): container finished" podID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" exitCode=1
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.527492 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerDied","Data":"3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9"}
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.527527 4814 scope.go:117] "RemoveContainer" containerID="d54f086af5398528505ba9cdf4e062ebd7895f53fc59e3268c49fa6186db035f"
Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.528192 4814 scope.go:117] "RemoveContainer" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9"
Feb 16 10:08:05 crc kubenswrapper[4814]: E0216 10:08:05.528511 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off
10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(88895e94-c6c9-4622-b6eb-94982898ac2b)\"" pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.627085 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.627302 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon-log" containerID="cri-o://a3b7fbb9342bb2d0a8a281033cd00be9db237d5e9833458a68dc92e72f9e66ca" gracePeriod=30 Feb 16 10:08:05 crc kubenswrapper[4814]: I0216 10:08:05.627699 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" containerID="cri-o://87871cb885e9b3909b68ea04065f4d9407f5cf1aba6d478b7386c5f4876768fd" gracePeriod=30 Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.443569 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.675134 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.715661 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716201 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716326 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716417 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716447 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716560 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8cs4\" (UniqueName: 
\"kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.716595 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd\") pod \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\" (UID: \"c86bfb21-74b9-406a-ae57-635d5ee7e5fd\") " Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.725174 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.725649 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.726735 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.726779 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.747693 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4" (OuterVolumeSpecName: "kube-api-access-k8cs4") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "kube-api-access-k8cs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.747871 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts" (OuterVolumeSpecName: "scripts") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.838905 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.838959 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8cs4\" (UniqueName: \"kubernetes.io/projected/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-kube-api-access-k8cs4\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.859272 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: W0216 10:08:06.868849 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1eb2dbf_6a0d_46b4_b470_dffaa69f510f.slice/crio-e9eaf448867b99c298de7b5ab3b408b52b45fdcd683e6c473cf3b55210a2c9de WatchSource:0}: Error finding container e9eaf448867b99c298de7b5ab3b408b52b45fdcd683e6c473cf3b55210a2c9de: Status 404 returned error can't find the container with id e9eaf448867b99c298de7b5ab3b408b52b45fdcd683e6c473cf3b55210a2c9de Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.887804 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.907432 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.948255 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.948282 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:06 crc kubenswrapper[4814]: I0216 10:08:06.968780 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data" (OuterVolumeSpecName: "config-data") pod "c86bfb21-74b9-406a-ae57-635d5ee7e5fd" (UID: "c86bfb21-74b9-406a-ae57-635d5ee7e5fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.050299 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fc9647d64-z5jk2"] Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.050745 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c86bfb21-74b9-406a-ae57-635d5ee7e5fd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.169288 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"] Feb 16 10:08:07 crc kubenswrapper[4814]: W0216 10:08:07.210758 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f3dfade_0392_451d_85d6_cf886a408bb4.slice/crio-00b6fa766f86427c349d1529f6285aea25808558da5a918053fabce1e86c1c5f WatchSource:0}: Error finding container 00b6fa766f86427c349d1529f6285aea25808558da5a918053fabce1e86c1c5f: Status 404 returned error can't find the container with id 00b6fa766f86427c349d1529f6285aea25808558da5a918053fabce1e86c1c5f Feb 16 10:08:07 crc kubenswrapper[4814]: W0216 10:08:07.254403 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7becc0_0d83_425b_9329_e3ddbedd82cd.slice/crio-7d1e9df1dcf812ce4b39b71294fa2d97a94881d78f71d8696cb580d59ae7c4fe WatchSource:0}: Error finding container 7d1e9df1dcf812ce4b39b71294fa2d97a94881d78f71d8696cb580d59ae7c4fe: Status 404 returned error can't find the container with id 7d1e9df1dcf812ce4b39b71294fa2d97a94881d78f71d8696cb580d59ae7c4fe Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.308672 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.376056 4814 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.414019 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="20340e82-7a4f-4828-affb-85843eca8f6c" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.177:9322/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.428345 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.609150 4814 generic.go:334] "Generic (PLEG): container finished" podID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerID="87871cb885e9b3909b68ea04065f4d9407f5cf1aba6d478b7386c5f4876768fd" exitCode=0 Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.609271 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerDied","Data":"87871cb885e9b3909b68ea04065f4d9407f5cf1aba6d478b7386c5f4876768fd"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.614361 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9bb875f-gkglk" event={"ID":"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f","Type":"ContainerStarted","Data":"ea70496e5e59dc282cecf1db5faed6d9bbabc712de7c1a3e62a59bc4ed3f3a27"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.614413 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cb9bb875f-gkglk" 
event={"ID":"7ee17c93-aa03-460b-a8ca-9fbc19b6a23f","Type":"ContainerStarted","Data":"3381d1408cda85d324b133197a6bc22a346bebfe50ff565a63f06cbff8a0353f"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.625274 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" event={"ID":"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57","Type":"ContainerStarted","Data":"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.625311 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="dnsmasq-dns" containerID="cri-o://82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa" gracePeriod=10 Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.625383 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.643159 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerStarted","Data":"e9eaf448867b99c298de7b5ab3b408b52b45fdcd683e6c473cf3b55210a2c9de"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.655024 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerStarted","Data":"b0b08f0da6884f381032a2a68e71b6bd3d3c38f863f498c7469e88a8c2101f88"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.658775 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" event={"ID":"ab7becc0-0d83-425b-9329-e3ddbedd82cd","Type":"ContainerStarted","Data":"7d1e9df1dcf812ce4b39b71294fa2d97a94881d78f71d8696cb580d59ae7c4fe"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.667032 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc9647d64-z5jk2" event={"ID":"3f3dfade-0392-451d-85d6-cf886a408bb4","Type":"ContainerStarted","Data":"e6b2f56374e07578e0c9c87fe11c5345d90523351cb77df6101a8490f43b73b6"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.667150 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc9647d64-z5jk2" event={"ID":"3f3dfade-0392-451d-85d6-cf886a408bb4","Type":"ContainerStarted","Data":"00b6fa766f86427c349d1529f6285aea25808558da5a918053fabce1e86c1c5f"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.670750 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cb9bb875f-gkglk" podStartSLOduration=3.867150168 podStartE2EDuration="9.670712235s" podCreationTimestamp="2026-02-16 10:07:58 +0000 UTC" firstStartedPulling="2026-02-16 10:08:00.340627651 +0000 UTC m=+1338.033783831" lastFinishedPulling="2026-02-16 10:08:06.144189718 +0000 UTC m=+1343.837345898" observedRunningTime="2026-02-16 10:08:07.641200354 +0000 UTC m=+1345.334356554" watchObservedRunningTime="2026-02-16 10:08:07.670712235 +0000 UTC m=+1345.363868415" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.673438 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" event={"ID":"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a","Type":"ContainerStarted","Data":"8801af83cb76d5fb8edef3684c06904d461f04ea0289608d6d87f2c7df75619b"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.673477 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" event={"ID":"f1ffe164-e3ac-43be-bd5a-c3c0aa75930a","Type":"ContainerStarted","Data":"24e2496424937fd7a6c3805abf83ea2f1a628ced652b22e49a89c882ca08dce1"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.688722 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c86bfb21-74b9-406a-ae57-635d5ee7e5fd","Type":"ContainerDied","Data":"6babd313f598b38921defef809aea84aea6b75959b19dee743bd0e9219bd5d1e"} Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.688784 4814 scope.go:117] "RemoveContainer" containerID="d82476c9f83b96957c0975e4b813ef4bbdb8c27466e34070e2e229705ca3c13f" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.688937 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.695281 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" podStartSLOduration=9.695258788 podStartE2EDuration="9.695258788s" podCreationTimestamp="2026-02-16 10:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:07.668136214 +0000 UTC m=+1345.361292394" watchObservedRunningTime="2026-02-16 10:08:07.695258788 +0000 UTC m=+1345.388414968" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.722382 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5db6f4b556-hsqhl" podStartSLOduration=3.765806275 podStartE2EDuration="9.722357882s" podCreationTimestamp="2026-02-16 10:07:58 +0000 UTC" firstStartedPulling="2026-02-16 10:08:00.201563741 +0000 UTC m=+1337.894719931" lastFinishedPulling="2026-02-16 10:08:06.158115358 +0000 UTC m=+1343.851271538" observedRunningTime="2026-02-16 10:08:07.697076315 +0000 UTC m=+1345.390232505" watchObservedRunningTime="2026-02-16 10:08:07.722357882 +0000 UTC m=+1345.415514072" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.815481 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.827406 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.845726 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:07 crc kubenswrapper[4814]: E0216 10:08:07.846629 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="sg-core" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.846662 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="sg-core" Feb 16 10:08:07 crc kubenswrapper[4814]: E0216 10:08:07.846701 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="proxy-httpd" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.846712 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="proxy-httpd" Feb 16 10:08:07 crc kubenswrapper[4814]: E0216 10:08:07.846742 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="ceilometer-notification-agent" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.846751 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="ceilometer-notification-agent" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.847003 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="proxy-httpd" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.847018 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="ceilometer-notification-agent" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.847043 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" containerName="sg-core" Feb 16 10:08:07 
crc kubenswrapper[4814]: I0216 10:08:07.855723 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.858310 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.861507 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.861972 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:08:07 crc kubenswrapper[4814]: I0216 10:08:07.871767 4814 scope.go:117] "RemoveContainer" containerID="00e8670591c5639d8864385e69b9b98adfd6563750769cb0d2dc36b57a4eda07" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.042130 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.042803 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzzt\" (UniqueName: \"kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.042927 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 
10:08:08.042999 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.043401 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.043508 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.043604 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.100789 4814 scope.go:117] "RemoveContainer" containerID="4751518f5963e87649e42f66d9c69d624f098c833c7af42d58ae4abb09a66803" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147414 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 
10:08:08.147519 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147574 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147609 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147697 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxzzt\" (UniqueName: \"kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.147772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.149173 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.150758 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.162662 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.162832 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.167732 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.171294 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.174150 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxzzt\" (UniqueName: \"kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt\") pod \"ceilometer-0\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.196489 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.481479 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.610473 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661286 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661412 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk2vw\" (UniqueName: \"kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661507 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661610 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661708 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.661825 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config\") pod \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\" (UID: \"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57\") " Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.669408 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.718677 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw" (OuterVolumeSpecName: "kube-api-access-vk2vw") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "kube-api-access-vk2vw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.764841 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk2vw\" (UniqueName: \"kubernetes.io/projected/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-kube-api-access-vk2vw\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.775966 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.790378 4814 generic.go:334] "Generic (PLEG): container finished" podID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerID="82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa" exitCode=0 Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.790480 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" event={"ID":"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57","Type":"ContainerDied","Data":"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa"} Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.790517 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" event={"ID":"a91aa8f6-4e13-4328-9cc0-a0aa3199bd57","Type":"ContainerDied","Data":"bf11059cb9b06c1eb591faaac1d1269f2ddba44d9665d8a8ff9c88854a5fd4ef"} Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.790556 4814 scope.go:117] "RemoveContainer" containerID="82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.790705 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-664c9d964f-bhzdp" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.825856 4814 generic.go:334] "Generic (PLEG): container finished" podID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerID="9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81" exitCode=0 Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.825979 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" event={"ID":"ab7becc0-0d83-425b-9329-e3ddbedd82cd","Type":"ContainerDied","Data":"9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81"} Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.831863 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config" (OuterVolumeSpecName: "config") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.867103 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.867170 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.889129 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fc9647d64-z5jk2" event={"ID":"3f3dfade-0392-451d-85d6-cf886a408bb4","Type":"ContainerStarted","Data":"7de6355955f61f022504031f4bc56868235b4138b6ca65638fedd5c851529987"} Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.898422 4814 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.929268 4814 scope.go:117] "RemoveContainer" containerID="57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.930129 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.972098 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:08 crc kubenswrapper[4814]: I0216 10:08:08.995789 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fc9647d64-z5jk2" podStartSLOduration=5.99574661 podStartE2EDuration="5.99574661s" podCreationTimestamp="2026-02-16 10:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:08.931887159 +0000 UTC m=+1346.625043339" watchObservedRunningTime="2026-02-16 10:08:08.99574661 +0000 UTC m=+1346.688902790" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.041372 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86bfb21-74b9-406a-ae57-635d5ee7e5fd" path="/var/lib/kubelet/pods/c86bfb21-74b9-406a-ae57-635d5ee7e5fd/volumes" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.072000 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb" 
(OuterVolumeSpecName: "ovsdbserver-sb") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.075440 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.095471 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" (UID: "a91aa8f6-4e13-4328-9cc0-a0aa3199bd57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.177780 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.279090 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.434642 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"] Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.441960 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-664c9d964f-bhzdp"] Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.904099 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerStarted","Data":"6f95d7f5f6264c9db8212de704e3f89191a6574b4b75ce731dd0266cc4db6599"} Feb 16 10:08:09 crc 
kubenswrapper[4814]: I0216 10:08:09.905736 4814 generic.go:334] "Generic (PLEG): container finished" podID="ac000d0d-d120-4828-b60f-3c2e3371dc68" containerID="acbf006fee012b44f22e856025634e5be593954e8ef65de06909047c7cac5cba" exitCode=0 Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.905818 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-chlgm" event={"ID":"ac000d0d-d120-4828-b60f-3c2e3371dc68","Type":"ContainerDied","Data":"acbf006fee012b44f22e856025634e5be593954e8ef65de06909047c7cac5cba"} Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.912858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerStarted","Data":"1d0320c0740514ef86a76b37845837b8e5e1d45feb3b526d15dbecec7f4b92d8"} Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.913057 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fc9647d64-z5jk2" Feb 16 10:08:09 crc kubenswrapper[4814]: I0216 10:08:09.913286 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fc9647d64-z5jk2" Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.766495 4814 scope.go:117] "RemoveContainer" containerID="82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa" Feb 16 10:08:10 crc kubenswrapper[4814]: E0216 10:08:10.772491 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa\": container with ID starting with 82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa not found: ID does not exist" containerID="82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa" Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.773749 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa"} err="failed to get container status \"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa\": rpc error: code = NotFound desc = could not find container \"82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa\": container with ID starting with 82ae5a7f37edcee5d9c7a7efe10ecb1767ab4f32bf0ff1f1e9bb6c7dd6c83afa not found: ID does not exist" Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.773991 4814 scope.go:117] "RemoveContainer" containerID="57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81" Feb 16 10:08:10 crc kubenswrapper[4814]: E0216 10:08:10.784186 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81\": container with ID starting with 57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81 not found: ID does not exist" containerID="57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81" Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.784303 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81"} err="failed to get container status \"57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81\": rpc error: code = NotFound desc = could not find container \"57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81\": container with ID starting with 57124bb85b6df81be01684723336943b8a29fd9b3c2d68c7199a9efabff17d81 not found: ID does not exist" Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.966774 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerStarted","Data":"ca8c67105dd1aa268f6da838fca8ba9e0800a0c12140e73f9091f74e6cb18d05"} 
Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.979878 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" event={"ID":"ab7becc0-0d83-425b-9329-e3ddbedd82cd","Type":"ContainerStarted","Data":"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e"} Feb 16 10:08:10 crc kubenswrapper[4814]: I0216 10:08:10.980330 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.026278 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" path="/var/lib/kubelet/pods/a91aa8f6-4e13-4328-9cc0-a0aa3199bd57/volumes" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.534953 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-chlgm" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.558891 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" podStartSLOduration=7.5588668949999995 podStartE2EDuration="7.558866895s" podCreationTimestamp="2026-02-16 10:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:11.014172797 +0000 UTC m=+1348.707328987" watchObservedRunningTime="2026-02-16 10:08:11.558866895 +0000 UTC m=+1349.252023075" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.586837 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.690716 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config\") pod \"ac000d0d-d120-4828-b60f-3c2e3371dc68\" (UID: 
\"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.690791 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle\") pod \"ac000d0d-d120-4828-b60f-3c2e3371dc68\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.690978 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmcnh\" (UniqueName: \"kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh\") pod \"ac000d0d-d120-4828-b60f-3c2e3371dc68\" (UID: \"ac000d0d-d120-4828-b60f-3c2e3371dc68\") " Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.703829 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh" (OuterVolumeSpecName: "kube-api-access-nmcnh") pod "ac000d0d-d120-4828-b60f-3c2e3371dc68" (UID: "ac000d0d-d120-4828-b60f-3c2e3371dc68"). InnerVolumeSpecName "kube-api-access-nmcnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.737149 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac000d0d-d120-4828-b60f-3c2e3371dc68" (UID: "ac000d0d-d120-4828-b60f-3c2e3371dc68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.761886 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config" (OuterVolumeSpecName: "config") pod "ac000d0d-d120-4828-b60f-3c2e3371dc68" (UID: "ac000d0d-d120-4828-b60f-3c2e3371dc68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.800154 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.800201 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac000d0d-d120-4828-b60f-3c2e3371dc68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.800212 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmcnh\" (UniqueName: \"kubernetes.io/projected/ac000d0d-d120-4828-b60f-3c2e3371dc68-kube-api-access-nmcnh\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:11 crc kubenswrapper[4814]: I0216 10:08:11.960200 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.002385 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerStarted","Data":"fe9f0292de218db8b924e2937d8a4bec9e47fb1649458f1348d2879e3452fcfa"} Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.002446 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerStarted","Data":"805ed1d0288a989e457e87e6544f2d93b475252db5bcc019266755f409c4cfbc"} Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.008788 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-chlgm" event={"ID":"ac000d0d-d120-4828-b60f-3c2e3371dc68","Type":"ContainerDied","Data":"9098f5aa96920b3d3c0016544c3e88a53a5ac435a5bd41c4234779c74418fa43"} Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.008863 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9098f5aa96920b3d3c0016544c3e88a53a5ac435a5bd41c4234779c74418fa43" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.008871 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-chlgm" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.347662 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"] Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.444963 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"] Feb 16 10:08:12 crc kubenswrapper[4814]: E0216 10:08:12.445418 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="dnsmasq-dns" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.445450 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="dnsmasq-dns" Feb 16 10:08:12 crc kubenswrapper[4814]: E0216 10:08:12.445490 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac000d0d-d120-4828-b60f-3c2e3371dc68" containerName="neutron-db-sync" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.445497 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac000d0d-d120-4828-b60f-3c2e3371dc68" containerName="neutron-db-sync" Feb 16 10:08:12 crc 
kubenswrapper[4814]: E0216 10:08:12.445512 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="init" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.445518 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="init" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.445732 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91aa8f6-4e13-4328-9cc0-a0aa3199bd57" containerName="dnsmasq-dns" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.445760 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac000d0d-d120-4828-b60f-3c2e3371dc68" containerName="neutron-db-sync" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.446802 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.475196 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.479009 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.490603 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"] Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.492709 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.495439 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.496227 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.496372 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4fj2k" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545340 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545505 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545590 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xmf\" (UniqueName: 
\"kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545673 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545759 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.545804 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.612370 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648252 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xmf\" (UniqueName: \"kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc 
kubenswrapper[4814]: I0216 10:08:12.648352 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648413 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648450 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648491 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648589 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648617 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx5s\" (UniqueName: \"kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648650 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648673 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648710 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.648769 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.649910 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.650866 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.651423 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.655069 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.655219 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.697038 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xmf\" (UniqueName: 
\"kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf\") pod \"dnsmasq-dns-7964bd959-r5xpf\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") " pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.754569 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.761608 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.761670 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx5s\" (UniqueName: \"kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.761714 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.761767 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs\") pod 
\"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.782362 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.784267 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.794915 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.795778 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.799435 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmx5s\" (UniqueName: \"kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.806759 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config\") pod \"neutron-644c587556-hkrfd\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:12 crc kubenswrapper[4814]: I0216 10:08:12.833257 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.102909 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerStarted","Data":"3ef6e6a0f7074704b07ac4baa7e207ec2e0ef30092eb8ba9a5061ebe2406eada"} Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.103464 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api-log" containerID="cri-o://ca8c67105dd1aa268f6da838fca8ba9e0800a0c12140e73f9091f74e6cb18d05" gracePeriod=30 Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.103627 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api" containerID="cri-o://3ef6e6a0f7074704b07ac4baa7e207ec2e0ef30092eb8ba9a5061ebe2406eada" gracePeriod=30 Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.103708 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.126964 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerStarted","Data":"b1c92bc3b8fdede80860622f35449e9319e34a32dd203031bf7605b4be07f8ed"} Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.137163 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerStarted","Data":"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e"} Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.258884 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.258858112 podStartE2EDuration="9.258858112s" podCreationTimestamp="2026-02-16 10:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:13.19595581 +0000 UTC m=+1350.889111990" watchObservedRunningTime="2026-02-16 10:08:13.258858112 +0000 UTC m=+1350.952014292" Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.272650 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.844434427 podStartE2EDuration="10.272624748s" podCreationTimestamp="2026-02-16 10:08:03 +0000 UTC" firstStartedPulling="2026-02-16 10:08:06.898808679 +0000 UTC m=+1344.591964859" lastFinishedPulling="2026-02-16 10:08:07.32699901 +0000 UTC m=+1345.020155180" observedRunningTime="2026-02-16 10:08:13.246017885 +0000 UTC m=+1350.939174065" watchObservedRunningTime="2026-02-16 10:08:13.272624748 +0000 UTC m=+1350.965780948" Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.352742 4814 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda6929b69-85c9-4084-9ff5-4e3a6af602dd"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda6929b69-85c9-4084-9ff5-4e3a6af602dd] : Timed out while waiting for systemd to remove kubepods-besteffort-poda6929b69_85c9_4084_9ff5_4e3a6af602dd.slice" Feb 16 10:08:13 crc kubenswrapper[4814]: E0216 10:08:13.352818 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda6929b69-85c9-4084-9ff5-4e3a6af602dd] : unable to destroy 
cgroup paths for cgroup [kubepods besteffort poda6929b69-85c9-4084-9ff5-4e3a6af602dd] : Timed out while waiting for systemd to remove kubepods-besteffort-poda6929b69_85c9_4084_9ff5_4e3a6af602dd.slice" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" Feb 16 10:08:13 crc kubenswrapper[4814]: I0216 10:08:13.664476 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"] Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.099600 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.171766 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" event={"ID":"8500ec66-11d7-4826-be1d-0ab947450b54","Type":"ContainerStarted","Data":"c1138fe7c6af6158b18ec1441b00ef27f2e3cb0248182c2102446ee86d70253b"} Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.189646 4814 generic.go:334] "Generic (PLEG): container finished" podID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerID="ca8c67105dd1aa268f6da838fca8ba9e0800a0c12140e73f9091f74e6cb18d05" exitCode=143 Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.189727 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9996885f-mkwrs" Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.190037 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="dnsmasq-dns" containerID="cri-o://2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e" gracePeriod=10 Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.190055 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerDied","Data":"ca8c67105dd1aa268f6da838fca8ba9e0800a0c12140e73f9091f74e6cb18d05"} Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.222796 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.226158 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.183:8080/\": dial tcp 10.217.0.183:8080: connect: connection refused" Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.274408 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:08:14 crc kubenswrapper[4814]: I0216 10:08:14.293152 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9996885f-mkwrs"] Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.017775 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6929b69-85c9-4084-9ff5-4e3a6af602dd" path="/var/lib/kubelet/pods/a6929b69-85c9-4084-9ff5-4e3a6af602dd/volumes" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.239706 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.272961 4814 generic.go:334] "Generic (PLEG): container finished" podID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerID="2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e" exitCode=0 Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.273123 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" event={"ID":"ab7becc0-0d83-425b-9329-e3ddbedd82cd","Type":"ContainerDied","Data":"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.273166 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" event={"ID":"ab7becc0-0d83-425b-9329-e3ddbedd82cd","Type":"ContainerDied","Data":"7d1e9df1dcf812ce4b39b71294fa2d97a94881d78f71d8696cb580d59ae7c4fe"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.273193 4814 scope.go:117] "RemoveContainer" containerID="2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.301847 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" event={"ID":"8500ec66-11d7-4826-be1d-0ab947450b54","Type":"ContainerDied","Data":"35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.301861 4814 generic.go:334] "Generic (PLEG): container finished" podID="8500ec66-11d7-4826-be1d-0ab947450b54" containerID="35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7" exitCode=0 Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.338323 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" 
event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerStarted","Data":"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.338399 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerStarted","Data":"554524c1434208bdaa3808c0837972728b664a5f145a90ccf49f1b60c60608f8"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.348176 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerStarted","Data":"5b30f8005760cb00dcad7c3c0ce513166b8d193c803c08d9e502813e4a2c0ef2"} Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.349595 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.398056 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.398226 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.398601 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvkn4\" (UniqueName: \"kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 
crc kubenswrapper[4814]: I0216 10:08:15.398657 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.398749 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.398812 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0\") pod \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\" (UID: \"ab7becc0-0d83-425b-9329-e3ddbedd82cd\") " Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.440228 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.517278738 podStartE2EDuration="8.440209329s" podCreationTimestamp="2026-02-16 10:08:07 +0000 UTC" firstStartedPulling="2026-02-16 10:08:09.062565461 +0000 UTC m=+1346.755721641" lastFinishedPulling="2026-02-16 10:08:13.985496052 +0000 UTC m=+1351.678652232" observedRunningTime="2026-02-16 10:08:15.395663855 +0000 UTC m=+1353.088820045" watchObservedRunningTime="2026-02-16 10:08:15.440209329 +0000 UTC m=+1353.133365509" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.450991 4814 scope.go:117] "RemoveContainer" containerID="9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.504411 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/watcher-decision-engine-0" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.505199 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.505230 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.505938 4814 scope.go:117] "RemoveContainer" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.507181 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4" (OuterVolumeSpecName: "kube-api-access-cvkn4") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "kube-api-access-cvkn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.590790 4814 scope.go:117] "RemoveContainer" containerID="2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e" Feb 16 10:08:15 crc kubenswrapper[4814]: E0216 10:08:15.595956 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e\": container with ID starting with 2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e not found: ID does not exist" containerID="2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.596031 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e"} err="failed to get container status 
\"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e\": rpc error: code = NotFound desc = could not find container \"2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e\": container with ID starting with 2a123559d2892a2febe539f29e2c496112c96ddfa84eb0755e16d892a20c722e not found: ID does not exist" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.596065 4814 scope.go:117] "RemoveContainer" containerID="9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81" Feb 16 10:08:15 crc kubenswrapper[4814]: E0216 10:08:15.601779 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81\": container with ID starting with 9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81 not found: ID does not exist" containerID="9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.601860 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81"} err="failed to get container status \"9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81\": rpc error: code = NotFound desc = could not find container \"9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81\": container with ID starting with 9491ce0bf4d38fdb5fc211069ff44461ba7995e5f4f23978e0a74cc5a532aa81 not found: ID does not exist" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.609215 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvkn4\" (UniqueName: \"kubernetes.io/projected/ab7becc0-0d83-425b-9329-e3ddbedd82cd-kube-api-access-cvkn4\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.936416 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.947705 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config" (OuterVolumeSpecName: "config") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.995510 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-655ddb8b77-xt84d"] Feb 16 10:08:15 crc kubenswrapper[4814]: E0216 10:08:15.996514 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="dnsmasq-dns" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.996583 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="dnsmasq-dns" Feb 16 10:08:15 crc kubenswrapper[4814]: E0216 10:08:15.996611 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="init" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.996618 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="init" Feb 16 10:08:15 crc kubenswrapper[4814]: I0216 10:08:15.996978 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" containerName="dnsmasq-dns" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:15.998256 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:15.999019 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.003844 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.003942 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.003957 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.018354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab7becc0-0d83-425b-9329-e3ddbedd82cd" (UID: "ab7becc0-0d83-425b-9329-e3ddbedd82cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.024259 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655ddb8b77-xt84d"] Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.037778 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.037809 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.037822 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.037836 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.037848 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab7becc0-0d83-425b-9329-e3ddbedd82cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.142750 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scvcz\" (UniqueName: \"kubernetes.io/projected/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-kube-api-access-scvcz\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 
10:08:16.142916 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-httpd-config\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.143045 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-combined-ca-bundle\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.143133 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-internal-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.143274 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-ovndb-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.143424 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-config\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.144010 
4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-public-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.248793 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-public-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.248872 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scvcz\" (UniqueName: \"kubernetes.io/projected/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-kube-api-access-scvcz\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.248952 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-httpd-config\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.250753 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-combined-ca-bundle\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.251570 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-internal-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.251648 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-ovndb-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.251700 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-config\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.276990 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-config\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.279247 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-public-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.284972 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-httpd-config\") pod 
\"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.291236 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-internal-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.292229 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-combined-ca-bundle\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.298395 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-ovndb-tls-certs\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.312673 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scvcz\" (UniqueName: \"kubernetes.io/projected/6331cc3a-ed6b-4e28-8cb4-544f16da5f8e-kube-api-access-scvcz\") pod \"neutron-655ddb8b77-xt84d\" (UID: \"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e\") " pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.377711 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7586bc8799-4lnds" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.398185 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" event={"ID":"8500ec66-11d7-4826-be1d-0ab947450b54","Type":"ContainerStarted","Data":"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"} Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.398317 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.408040 4814 generic.go:334] "Generic (PLEG): container finished" podID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerID="1d0320c0740514ef86a76b37845837b8e5e1d45feb3b526d15dbecec7f4b92d8" exitCode=0 Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.408233 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerDied","Data":"1d0320c0740514ef86a76b37845837b8e5e1d45feb3b526d15dbecec7f4b92d8"} Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.409919 4814 scope.go:117] "RemoveContainer" containerID="1d0320c0740514ef86a76b37845837b8e5e1d45feb3b526d15dbecec7f4b92d8" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.428676 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerStarted","Data":"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185"} Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.428750 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.434528 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" podStartSLOduration=4.434493867 
podStartE2EDuration="4.434493867s" podCreationTimestamp="2026-02-16 10:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:16.431252923 +0000 UTC m=+1354.124409123" watchObservedRunningTime="2026-02-16 10:08:16.434493867 +0000 UTC m=+1354.127650047" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.435341 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.447720 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerStarted","Data":"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"} Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.564136 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-644c587556-hkrfd" podStartSLOduration=4.564101187 podStartE2EDuration="4.564101187s" podCreationTimestamp="2026-02-16 10:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:16.504021483 +0000 UTC m=+1354.197177663" watchObservedRunningTime="2026-02-16 10:08:16.564101187 +0000 UTC m=+1354.257257367" Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.580655 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"] Feb 16 10:08:16 crc kubenswrapper[4814]: I0216 10:08:16.596994 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7586bc8799-4lnds"] Feb 16 10:08:17 crc kubenswrapper[4814]: I0216 10:08:17.016944 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab7becc0-0d83-425b-9329-e3ddbedd82cd" path="/var/lib/kubelet/pods/ab7becc0-0d83-425b-9329-e3ddbedd82cd/volumes" Feb 16 
10:08:17 crc kubenswrapper[4814]: I0216 10:08:17.361152 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 16 10:08:17 crc kubenswrapper[4814]: I0216 10:08:17.415157 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-655ddb8b77-xt84d"] Feb 16 10:08:17 crc kubenswrapper[4814]: I0216 10:08:17.540680 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655ddb8b77-xt84d" event={"ID":"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e","Type":"ContainerStarted","Data":"d1dd958187239903ab8f4d1b6b7ae1a31261808733a1c48cec02d8e9c3c557b8"} Feb 16 10:08:17 crc kubenswrapper[4814]: I0216 10:08:17.561256 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fc9647d64-z5jk2" Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.413171 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846869f756-srzgg" Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.452319 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-846869f756-srzgg" Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.469175 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fc9647d64-z5jk2" Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.579797 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.581235 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-865d97dbf4-rmb8f" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" 
containerName="barbican-api-log" containerID="cri-o://0dba60d68feeea67b94eaef713ea6b7cd1898b94fcf8eec3d78c3a361bf23851" gracePeriod=30 Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.581959 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-865d97dbf4-rmb8f" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api" containerID="cri-o://6a02836f22987781418bb24f46eb67e0cbe4a52bf4842cb5d70ddaa7b7e84213" gracePeriod=30 Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.603508 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerStarted","Data":"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e"} Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.628228 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655ddb8b77-xt84d" event={"ID":"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e","Type":"ContainerStarted","Data":"0dbffb03e77880f23b58224a73a9c3469e85c0fcff79d60cc4f4d6e05fafbafb"} Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.628293 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-655ddb8b77-xt84d" event={"ID":"6331cc3a-ed6b-4e28-8cb4-544f16da5f8e","Type":"ContainerStarted","Data":"abe785c03fde7ab41009530c5b0724d149ae7411eeb0abe251e11440ba571505"} Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.629841 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:18 crc kubenswrapper[4814]: I0216 10:08:18.695189 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-655ddb8b77-xt84d" podStartSLOduration=3.695155563 podStartE2EDuration="3.695155563s" podCreationTimestamp="2026-02-16 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 10:08:18.655390326 +0000 UTC m=+1356.348546516" watchObservedRunningTime="2026-02-16 10:08:18.695155563 +0000 UTC m=+1356.388311743" Feb 16 10:08:19 crc kubenswrapper[4814]: I0216 10:08:19.219834 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:19 crc kubenswrapper[4814]: I0216 10:08:19.648674 4814 generic.go:334] "Generic (PLEG): container finished" podID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerID="0dba60d68feeea67b94eaef713ea6b7cd1898b94fcf8eec3d78c3a361bf23851" exitCode=143 Feb 16 10:08:19 crc kubenswrapper[4814]: I0216 10:08:19.648795 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerDied","Data":"0dba60d68feeea67b94eaef713ea6b7cd1898b94fcf8eec3d78c3a361bf23851"} Feb 16 10:08:20 crc kubenswrapper[4814]: I0216 10:08:20.489020 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fc9647d64-z5jk2" podUID="3f3dfade-0392-451d-85d6-cf886a408bb4" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.182:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:08:21 crc kubenswrapper[4814]: I0216 10:08:21.206604 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:08:21 crc kubenswrapper[4814]: I0216 10:08:21.852433 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b595588cb-jj9fp" Feb 16 10:08:21 crc kubenswrapper[4814]: I0216 10:08:21.926656 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-846869f756-srzgg"] Feb 16 10:08:21 crc kubenswrapper[4814]: I0216 10:08:21.926933 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/placement-846869f756-srzgg" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-log" containerID="cri-o://302182948ffc83c2dfcace7fa5feea470265cfe21c390d4ccc310632a6428e7e" gracePeriod=30 Feb 16 10:08:21 crc kubenswrapper[4814]: I0216 10:08:21.927406 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-846869f756-srzgg" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-api" containerID="cri-o://fcc2b19f83f34873671e1de09f7da68afd069c43d6617b3b09790de797982cc1" gracePeriod=30 Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.183793 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865d97dbf4-rmb8f" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": read tcp 10.217.0.2:47774->10.217.0.181:9311: read: connection reset by peer" Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.184565 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-865d97dbf4-rmb8f" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.181:9311/healthcheck\": read tcp 10.217.0.2:47766->10.217.0.181:9311: read: connection reset by peer" Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.361134 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8669966799-gwc6g" Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.382529 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-fc9647d64-z5jk2" podUID="3f3dfade-0392-451d-85d6-cf886a408bb4" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.182:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 
10:08:22.736559 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerID="302182948ffc83c2dfcace7fa5feea470265cfe21c390d4ccc310632a6428e7e" exitCode=143 Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.736695 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerDied","Data":"302182948ffc83c2dfcace7fa5feea470265cfe21c390d4ccc310632a6428e7e"} Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.739605 4814 generic.go:334] "Generic (PLEG): container finished" podID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerID="6a02836f22987781418bb24f46eb67e0cbe4a52bf4842cb5d70ddaa7b7e84213" exitCode=0 Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.739959 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerDied","Data":"6a02836f22987781418bb24f46eb67e0cbe4a52bf4842cb5d70ddaa7b7e84213"} Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.787766 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.876497 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.877261 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="dnsmasq-dns" containerID="cri-o://b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9" gracePeriod=10 Feb 16 10:08:22 crc kubenswrapper[4814]: I0216 10:08:22.985936 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.150669 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snm7q\" (UniqueName: \"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q\") pod \"917e665e-dc47-4b2d-9f9f-32896670a6f6\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.150759 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs\") pod \"917e665e-dc47-4b2d-9f9f-32896670a6f6\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.150917 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data\") pod \"917e665e-dc47-4b2d-9f9f-32896670a6f6\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.151089 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle\") pod \"917e665e-dc47-4b2d-9f9f-32896670a6f6\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.151254 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom\") pod \"917e665e-dc47-4b2d-9f9f-32896670a6f6\" (UID: \"917e665e-dc47-4b2d-9f9f-32896670a6f6\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.160927 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q" (OuterVolumeSpecName: "kube-api-access-snm7q") pod "917e665e-dc47-4b2d-9f9f-32896670a6f6" (UID: "917e665e-dc47-4b2d-9f9f-32896670a6f6"). InnerVolumeSpecName "kube-api-access-snm7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.177699 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs" (OuterVolumeSpecName: "logs") pod "917e665e-dc47-4b2d-9f9f-32896670a6f6" (UID: "917e665e-dc47-4b2d-9f9f-32896670a6f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.184838 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "917e665e-dc47-4b2d-9f9f-32896670a6f6" (UID: "917e665e-dc47-4b2d-9f9f-32896670a6f6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.225872 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data" (OuterVolumeSpecName: "config-data") pod "917e665e-dc47-4b2d-9f9f-32896670a6f6" (UID: "917e665e-dc47-4b2d-9f9f-32896670a6f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.250696 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "917e665e-dc47-4b2d-9f9f-32896670a6f6" (UID: "917e665e-dc47-4b2d-9f9f-32896670a6f6"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.254670 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.254740 4814 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.254755 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snm7q\" (UniqueName: \"kubernetes.io/projected/917e665e-dc47-4b2d-9f9f-32896670a6f6-kube-api-access-snm7q\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.254772 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/917e665e-dc47-4b2d-9f9f-32896670a6f6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.254786 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/917e665e-dc47-4b2d-9f9f-32896670a6f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.590495 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.666932 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.667078 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsfbb\" (UniqueName: \"kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.667398 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.667471 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.667627 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.667658 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb\") pod \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\" (UID: \"5328dae7-ac38-4d55-aa96-b7a3387cb13f\") " Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.689048 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb" (OuterVolumeSpecName: "kube-api-access-gsfbb") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "kube-api-access-gsfbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.745794 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.770799 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.770844 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsfbb\" (UniqueName: \"kubernetes.io/projected/5328dae7-ac38-4d55-aa96-b7a3387cb13f-kube-api-access-gsfbb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.774912 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-865d97dbf4-rmb8f" event={"ID":"917e665e-dc47-4b2d-9f9f-32896670a6f6","Type":"ContainerDied","Data":"f3f5d8c9fbedb172f27208aeb62da0921f785fc4b1d6ad99a8f78ab630863b51"} Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.775030 4814 scope.go:117] "RemoveContainer" containerID="6a02836f22987781418bb24f46eb67e0cbe4a52bf4842cb5d70ddaa7b7e84213" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.775279 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-865d97dbf4-rmb8f" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.776371 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.784519 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.792893 4814 generic.go:334] "Generic (PLEG): container finished" podID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerID="b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9" exitCode=0 Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.792995 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" event={"ID":"5328dae7-ac38-4d55-aa96-b7a3387cb13f","Type":"ContainerDied","Data":"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9"} Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.793046 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" event={"ID":"5328dae7-ac38-4d55-aa96-b7a3387cb13f","Type":"ContainerDied","Data":"e7da8f99f7d79b8c3e33215ef7aa2b4fc68339d6c8ab9c06e5853e9059c3a2fb"} Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.793126 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9476bf7d5-wqwks" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.809320 4814 generic.go:334] "Generic (PLEG): container finished" podID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerID="fcc2b19f83f34873671e1de09f7da68afd069c43d6617b3b09790de797982cc1" exitCode=0 Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.809371 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerDied","Data":"fcc2b19f83f34873671e1de09f7da68afd069c43d6617b3b09790de797982cc1"} Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.814015 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config" (OuterVolumeSpecName: "config") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.819255 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5328dae7-ac38-4d55-aa96-b7a3387cb13f" (UID: "5328dae7-ac38-4d55-aa96-b7a3387cb13f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.843037 4814 scope.go:117] "RemoveContainer" containerID="0dba60d68feeea67b94eaef713ea6b7cd1898b94fcf8eec3d78c3a361bf23851" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.862890 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.873636 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.873668 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.873678 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.873688 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5328dae7-ac38-4d55-aa96-b7a3387cb13f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.888434 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-865d97dbf4-rmb8f"] Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.900076 4814 scope.go:117] "RemoveContainer" containerID="b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9" Feb 16 10:08:23 crc kubenswrapper[4814]: I0216 10:08:23.947802 4814 scope.go:117] "RemoveContainer" containerID="0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f" Feb 16 
10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.064500 4814 scope.go:117] "RemoveContainer" containerID="b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9" Feb 16 10:08:24 crc kubenswrapper[4814]: E0216 10:08:24.067957 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9\": container with ID starting with b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9 not found: ID does not exist" containerID="b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.068029 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9"} err="failed to get container status \"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9\": rpc error: code = NotFound desc = could not find container \"b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9\": container with ID starting with b817695a95b1194b88b39b64de12c91928b90d52799f980281873bbe0c8e21a9 not found: ID does not exist" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.068078 4814 scope.go:117] "RemoveContainer" containerID="0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f" Feb 16 10:08:24 crc kubenswrapper[4814]: E0216 10:08:24.080293 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f\": container with ID starting with 0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f not found: ID does not exist" containerID="0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.080349 4814 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f"} err="failed to get container status \"0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f\": rpc error: code = NotFound desc = could not find container \"0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f\": container with ID starting with 0c6b6f7d375750fb058602d61231a032086938e536f163307aca36eae6c6521f not found: ID does not exist" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.215867 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.229820 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9476bf7d5-wqwks"] Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.329344 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-846869f756-srzgg" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.505858 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506061 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506250 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: 
\"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506277 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506367 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506407 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.506434 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snbvl\" (UniqueName: \"kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl\") pod \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\" (UID: \"6d18d3d4-1253-4949-a6b0-42c6bd32b340\") " Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.507789 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs" (OuterVolumeSpecName: "logs") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.531138 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl" (OuterVolumeSpecName: "kube-api-access-snbvl") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "kube-api-access-snbvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.534850 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts" (OuterVolumeSpecName: "scripts") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.610497 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d18d3d4-1253-4949-a6b0-42c6bd32b340-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.610549 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snbvl\" (UniqueName: \"kubernetes.io/projected/6d18d3d4-1253-4949-a6b0-42c6bd32b340-kube-api-access-snbvl\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.610562 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.667556 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data" (OuterVolumeSpecName: "config-data") pod 
"6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.675069 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.710527 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.712814 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.712846 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.713167 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.733421 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d18d3d4-1253-4949-a6b0-42c6bd32b340" (UID: "6d18d3d4-1253-4949-a6b0-42c6bd32b340"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.765737 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.185:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.774270 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.819149 4814 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.819187 4814 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d18d3d4-1253-4949-a6b0-42c6bd32b340-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.829570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-846869f756-srzgg" event={"ID":"6d18d3d4-1253-4949-a6b0-42c6bd32b340","Type":"ContainerDied","Data":"5799022d0c8ccf79e97649148c97ab28d222b778afddac02877f6a5d0955d80b"} Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.829633 4814 scope.go:117] "RemoveContainer" 
containerID="fcc2b19f83f34873671e1de09f7da68afd069c43d6617b3b09790de797982cc1" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.829760 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-846869f756-srzgg" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.851982 4814 generic.go:334] "Generic (PLEG): container finished" podID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerID="4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e" exitCode=0 Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.852040 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerDied","Data":"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e"} Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.852232 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="probe" containerID="cri-o://da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e" gracePeriod=30 Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.880810 4814 scope.go:117] "RemoveContainer" containerID="302182948ffc83c2dfcace7fa5feea470265cfe21c390d4ccc310632a6428e7e" Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.882783 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-846869f756-srzgg"] Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.911587 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-846869f756-srzgg"] Feb 16 10:08:24 crc kubenswrapper[4814]: I0216 10:08:24.935796 4814 scope.go:117] "RemoveContainer" containerID="1d0320c0740514ef86a76b37845837b8e5e1d45feb3b526d15dbecec7f4b92d8" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.094466 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" path="/var/lib/kubelet/pods/5328dae7-ac38-4d55-aa96-b7a3387cb13f/volumes" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.096895 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" path="/var/lib/kubelet/pods/6d18d3d4-1253-4949-a6b0-42c6bd32b340/volumes" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.097817 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" path="/var/lib/kubelet/pods/917e665e-dc47-4b2d-9f9f-32896670a6f6/volumes" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.099696 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.100206 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="dnsmasq-dns" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100223 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="dnsmasq-dns" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.100252 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100261 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.100286 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api-log" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100296 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api-log" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 
10:08:25.100308 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-api" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100319 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-api" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.100327 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-log" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100335 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-log" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.100356 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="init" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100364 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="init" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100834 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="5328dae7-ac38-4d55-aa96-b7a3387cb13f" containerName="dnsmasq-dns" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100858 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100897 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="917e665e-dc47-4b2d-9f9f-32896670a6f6" containerName="barbican-api-log" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.100999 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-api" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.101013 4814 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6d18d3d4-1253-4949-a6b0-42c6bd32b340" containerName="placement-log" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.102712 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.102825 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.106877 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.107156 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.116930 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xfwnw" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.240625 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kzk\" (UniqueName: \"kubernetes.io/projected/8a5610e4-be60-4c16-9911-e06986025235-kube-api-access-w6kzk\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.240704 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.240779 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.240817 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a5610e4-be60-4c16-9911-e06986025235-openstack-config\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.343877 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.343978 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a5610e4-be60-4c16-9911-e06986025235-openstack-config\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.344232 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kzk\" (UniqueName: \"kubernetes.io/projected/8a5610e4-be60-4c16-9911-e06986025235-kube-api-access-w6kzk\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.344288 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.344895 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8a5610e4-be60-4c16-9911-e06986025235-openstack-config\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.351121 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-openstack-config-secret\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.353563 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5610e4-be60-4c16-9911-e06986025235-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.369639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kzk\" (UniqueName: \"kubernetes.io/projected/8a5610e4-be60-4c16-9911-e06986025235-kube-api-access-w6kzk\") pod \"openstackclient\" (UID: \"8a5610e4-be60-4c16-9911-e06986025235\") " pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.457545 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.503997 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.507147 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 is running failed: container process not found" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.507925 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 is running failed: container process not found" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.508270 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 is running failed: container process not found" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.508357 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 is running failed: container process not found" probeType="Startup" 
pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.891824 4814 generic.go:334] "Generic (PLEG): container finished" podID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" exitCode=1 Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.892382 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerDied","Data":"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"} Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.892440 4814 scope.go:117] "RemoveContainer" containerID="3a6bb755d8f78491806e1fb541ffeaff0c46d8a1830919861e6e96abfed0b7f9" Feb 16 10:08:25 crc kubenswrapper[4814]: I0216 10:08:25.893321 4814 scope.go:117] "RemoveContainer" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" Feb 16 10:08:25 crc kubenswrapper[4814]: E0216 10:08:25.893737 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(88895e94-c6c9-4622-b6eb-94982898ac2b)\"" pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.106912 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.740676 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.814808 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.814866 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.814891 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.814938 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.815063 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lhxg\" (UniqueName: \"kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.815086 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id\") pod \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\" (UID: \"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f\") " Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.815474 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.839910 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.839997 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts" (OuterVolumeSpecName: "scripts") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.843428 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg" (OuterVolumeSpecName: "kube-api-access-6lhxg") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "kube-api-access-6lhxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.917297 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lhxg\" (UniqueName: \"kubernetes.io/projected/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-kube-api-access-6lhxg\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.917670 4814 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.917679 4814 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.917690 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.930376 4814 generic.go:334] "Generic (PLEG): container finished" podID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerID="da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e" exitCode=0 Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.930453 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerDied","Data":"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e"} Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.930585 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a1eb2dbf-6a0d-46b4-b470-dffaa69f510f","Type":"ContainerDied","Data":"e9eaf448867b99c298de7b5ab3b408b52b45fdcd683e6c473cf3b55210a2c9de"} Feb 16 
10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.930615 4814 scope.go:117] "RemoveContainer" containerID="4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.930972 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.932128 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8a5610e4-be60-4c16-9911-e06986025235","Type":"ContainerStarted","Data":"029bcfed80433eb53387ecf525649656ee486ce012723a0125f32419ae310c65"} Feb 16 10:08:26 crc kubenswrapper[4814]: I0216 10:08:26.969320 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.011479 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data" (OuterVolumeSpecName: "config-data") pod "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" (UID: "a1eb2dbf-6a0d-46b4-b470-dffaa69f510f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.018741 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.018785 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.094646 4814 scope.go:117] "RemoveContainer" containerID="da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.138502 4814 scope.go:117] "RemoveContainer" containerID="4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e" Feb 16 10:08:27 crc kubenswrapper[4814]: E0216 10:08:27.139475 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e\": container with ID starting with 4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e not found: ID does not exist" containerID="4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.139551 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e"} err="failed to get container status \"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e\": rpc error: code = NotFound desc = could not find container \"4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e\": container with ID starting with 4427414ef88bae4b8dddcec8882be69a61ecd67f90ab68aa25a932230370398e not found: ID does not 
exist" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.139589 4814 scope.go:117] "RemoveContainer" containerID="da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e" Feb 16 10:08:27 crc kubenswrapper[4814]: E0216 10:08:27.141350 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e\": container with ID starting with da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e not found: ID does not exist" containerID="da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.141410 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e"} err="failed to get container status \"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e\": rpc error: code = NotFound desc = could not find container \"da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e\": container with ID starting with da698126e5cb6ea8728b844d968fae39536bcac13f29f31b97a19c578554d81e not found: ID does not exist" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.278064 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.295410 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.314881 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:27 crc kubenswrapper[4814]: E0216 10:08:27.315429 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.315453 4814 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: E0216 10:08:27.315464 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.315472 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: E0216 10:08:27.315491 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="probe" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.315498 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="probe" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.319984 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.320030 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="cinder-scheduler" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.320059 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" containerName="probe" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.325968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.331677 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339443 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4396e79-fda2-435d-ae1f-f92a838ea655-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339512 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339579 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftvw\" (UniqueName: \"kubernetes.io/projected/c4396e79-fda2-435d-ae1f-f92a838ea655-kube-api-access-gftvw\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339629 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339649 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.339729 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.343880 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.363766 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.363968 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441422 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftvw\" (UniqueName: \"kubernetes.io/projected/c4396e79-fda2-435d-ae1f-f92a838ea655-kube-api-access-gftvw\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " 
pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441523 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441556 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441627 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441699 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4396e79-fda2-435d-ae1f-f92a838ea655-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.441782 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4396e79-fda2-435d-ae1f-f92a838ea655-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.451909 4814 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.454292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.454338 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.455079 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4396e79-fda2-435d-ae1f-f92a838ea655-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.465627 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftvw\" (UniqueName: \"kubernetes.io/projected/c4396e79-fda2-435d-ae1f-f92a838ea655-kube-api-access-gftvw\") pod \"cinder-scheduler-0\" (UID: \"c4396e79-fda2-435d-ae1f-f92a838ea655\") " pod="openstack/cinder-scheduler-0" Feb 16 10:08:27 crc kubenswrapper[4814]: I0216 10:08:27.676481 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 10:08:28 crc kubenswrapper[4814]: I0216 10:08:28.207448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 10:08:28 crc kubenswrapper[4814]: I0216 10:08:28.348864 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.032204 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1eb2dbf-6a0d-46b4-b470-dffaa69f510f" path="/var/lib/kubelet/pods/a1eb2dbf-6a0d-46b4-b470-dffaa69f510f/volumes" Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.033734 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"21b82747464c3caecaa8873e3de1d9058af99ef4f2a0a3179275a35108f1b54e"} Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.828135 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"] Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.831065 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.850287 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"] Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.945405 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnnk6\" (UniqueName: \"kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.945492 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:29 crc kubenswrapper[4814]: I0216 10:08:29.945927 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.048252 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnnk6\" (UniqueName: \"kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.048790 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.048949 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.049678 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.049960 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.071407 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnnk6\" (UniqueName: \"kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6\") pod \"redhat-operators-7zqmw\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.073503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"e85009511478b3116e41589c044c22598ed2a114954698156e570672dc2115a2"} Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.073572 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"c8fe78619ae6dc22f80270563a7f862d207cdf27392b7272fd1b1de5bd079eec"} Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.189030 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:30 crc kubenswrapper[4814]: I0216 10:08:30.763038 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"] Feb 16 10:08:30 crc kubenswrapper[4814]: W0216 10:08:30.769974 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d9f6435_c405_4209_9a92_26f39daf2909.slice/crio-6c0a0c77b23b85647334332c5aab3c690c05ea754a89e8cae1dac7e515427ff5 WatchSource:0}: Error finding container 6c0a0c77b23b85647334332c5aab3c690c05ea754a89e8cae1dac7e515427ff5: Status 404 returned error can't find the container with id 6c0a0c77b23b85647334332c5aab3c690c05ea754a89e8cae1dac7e515427ff5 Feb 16 10:08:31 crc kubenswrapper[4814]: I0216 10:08:31.096693 4814 generic.go:334] "Generic (PLEG): container finished" podID="7d9f6435-c405-4209-9a92-26f39daf2909" containerID="bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331" exitCode=0 Feb 16 10:08:31 crc kubenswrapper[4814]: I0216 10:08:31.098005 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerDied","Data":"bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331"} Feb 16 10:08:31 crc kubenswrapper[4814]: I0216 10:08:31.098070 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerStarted","Data":"6c0a0c77b23b85647334332c5aab3c690c05ea754a89e8cae1dac7e515427ff5"} Feb 16 10:08:31 crc kubenswrapper[4814]: I0216 10:08:31.129314 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.129280531 podStartE2EDuration="4.129280531s" podCreationTimestamp="2026-02-16 10:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:31.126972509 +0000 UTC m=+1368.820128709" watchObservedRunningTime="2026-02-16 10:08:31.129280531 +0000 UTC m=+1368.822436711" Feb 16 10:08:32 crc kubenswrapper[4814]: I0216 10:08:32.676987 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.128588 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerStarted","Data":"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48"} Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.536831 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-bf98696f9-fcvdv"] Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.538453 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.542555 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.550899 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.553021 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.563190 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bf98696f9-fcvdv"] Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669405 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-etc-swift\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669478 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-public-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669578 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-combined-ca-bundle\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 
10:08:33.669635 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smk5v\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-kube-api-access-smk5v\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669737 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-internal-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669765 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-config-data\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669805 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-run-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.669991 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-log-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc 
kubenswrapper[4814]: I0216 10:08:33.773105 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-internal-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773175 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-config-data\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773216 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-run-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773253 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-log-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773315 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-etc-swift\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773351 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-public-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773383 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-combined-ca-bundle\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.773430 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smk5v\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-kube-api-access-smk5v\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.775513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-log-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.776936 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2818c738-cd93-486f-8b95-3e0c60ec8b59-run-httpd\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.793346 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-public-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.793748 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-internal-tls-certs\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.794115 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-config-data\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.795975 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2818c738-cd93-486f-8b95-3e0c60ec8b59-combined-ca-bundle\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.797131 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-etc-swift\") pod \"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.799484 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smk5v\" (UniqueName: \"kubernetes.io/projected/2818c738-cd93-486f-8b95-3e0c60ec8b59-kube-api-access-smk5v\") pod 
\"swift-proxy-bf98696f9-fcvdv\" (UID: \"2818c738-cd93-486f-8b95-3e0c60ec8b59\") " pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:33 crc kubenswrapper[4814]: I0216 10:08:33.869134 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:34 crc kubenswrapper[4814]: I0216 10:08:34.150794 4814 generic.go:334] "Generic (PLEG): container finished" podID="7d9f6435-c405-4209-9a92-26f39daf2909" containerID="29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48" exitCode=0 Feb 16 10:08:34 crc kubenswrapper[4814]: I0216 10:08:34.150855 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerDied","Data":"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48"} Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.170095 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="c8fe78619ae6dc22f80270563a7f862d207cdf27392b7272fd1b1de5bd079eec" exitCode=0 Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.170281 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"c8fe78619ae6dc22f80270563a7f862d207cdf27392b7272fd1b1de5bd079eec"} Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.171743 4814 scope.go:117] "RemoveContainer" containerID="c8fe78619ae6dc22f80270563a7f862d207cdf27392b7272fd1b1de5bd079eec" Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.504211 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.505205 4814 scope.go:117] "RemoveContainer" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" Feb 16 10:08:35 crc 
kubenswrapper[4814]: E0216 10:08:35.505448 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(88895e94-c6c9-4622-b6eb-94982898ac2b)\"" pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.755726 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.756311 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-central-agent" containerID="cri-o://805ed1d0288a989e457e87e6544f2d93b475252db5bcc019266755f409c4cfbc" gracePeriod=30 Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.756492 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="proxy-httpd" containerID="cri-o://5b30f8005760cb00dcad7c3c0ce513166b8d193c803c08d9e502813e4a2c0ef2" gracePeriod=30 Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.756592 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="sg-core" containerID="cri-o://b1c92bc3b8fdede80860622f35449e9319e34a32dd203031bf7605b4be07f8ed" gracePeriod=30 Feb 16 10:08:35 crc kubenswrapper[4814]: I0216 10:08:35.756653 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-notification-agent" containerID="cri-o://fe9f0292de218db8b924e2937d8a4bec9e47fb1649458f1348d2879e3452fcfa" gracePeriod=30 Feb 16 10:08:35 crc 
kubenswrapper[4814]: I0216 10:08:35.769620 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.195202 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerDied","Data":"a3b7fbb9342bb2d0a8a281033cd00be9db237d5e9833458a68dc92e72f9e66ca"} Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.195286 4814 generic.go:334] "Generic (PLEG): container finished" podID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerID="a3b7fbb9342bb2d0a8a281033cd00be9db237d5e9833458a68dc92e72f9e66ca" exitCode=137 Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.202635 4814 generic.go:334] "Generic (PLEG): container finished" podID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerID="5b30f8005760cb00dcad7c3c0ce513166b8d193c803c08d9e502813e4a2c0ef2" exitCode=0 Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.202670 4814 generic.go:334] "Generic (PLEG): container finished" podID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerID="b1c92bc3b8fdede80860622f35449e9319e34a32dd203031bf7605b4be07f8ed" exitCode=2 Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.202791 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerDied","Data":"5b30f8005760cb00dcad7c3c0ce513166b8d193c803c08d9e502813e4a2c0ef2"} Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.202837 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerDied","Data":"b1c92bc3b8fdede80860622f35449e9319e34a32dd203031bf7605b4be07f8ed"} Feb 16 10:08:36 crc kubenswrapper[4814]: I0216 10:08:36.676914 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 
10:08:37 crc kubenswrapper[4814]: I0216 10:08:37.226654 4814 generic.go:334] "Generic (PLEG): container finished" podID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerID="805ed1d0288a989e457e87e6544f2d93b475252db5bcc019266755f409c4cfbc" exitCode=0 Feb 16 10:08:37 crc kubenswrapper[4814]: I0216 10:08:37.226725 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerDied","Data":"805ed1d0288a989e457e87e6544f2d93b475252db5bcc019266755f409c4cfbc"} Feb 16 10:08:37 crc kubenswrapper[4814]: I0216 10:08:37.361318 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f95b74b5b-mpwlg" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.164:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.164:8443: connect: connection refused" Feb 16 10:08:37 crc kubenswrapper[4814]: I0216 10:08:37.677292 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:38 crc kubenswrapper[4814]: I0216 10:08:38.198780 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.186:3000/\": dial tcp 10.217.0.186:3000: connect: connection refused" Feb 16 10:08:40 crc kubenswrapper[4814]: I0216 10:08:40.298501 4814 generic.go:334] "Generic (PLEG): container finished" podID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerID="fe9f0292de218db8b924e2937d8a4bec9e47fb1649458f1348d2879e3452fcfa" exitCode=0 Feb 16 10:08:40 crc kubenswrapper[4814]: I0216 10:08:40.298601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerDied","Data":"fe9f0292de218db8b924e2937d8a4bec9e47fb1649458f1348d2879e3452fcfa"} Feb 16 10:08:40 crc kubenswrapper[4814]: I0216 10:08:40.299126 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:40 crc kubenswrapper[4814]: I0216 10:08:40.299426 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="55140aa6-2437-463c-be2e-0fa6735ee321" containerName="kube-state-metrics" containerID="cri-o://fd8e5ec3b18bccf213bc27a1e20c585795b6685cfc381f7b958ae9d75e245297" gracePeriod=30 Feb 16 10:08:41 crc kubenswrapper[4814]: I0216 10:08:41.314962 4814 generic.go:334] "Generic (PLEG): container finished" podID="55140aa6-2437-463c-be2e-0fa6735ee321" containerID="fd8e5ec3b18bccf213bc27a1e20c585795b6685cfc381f7b958ae9d75e245297" exitCode=2 Feb 16 10:08:41 crc kubenswrapper[4814]: I0216 10:08:41.315030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55140aa6-2437-463c-be2e-0fa6735ee321","Type":"ContainerDied","Data":"fd8e5ec3b18bccf213bc27a1e20c585795b6685cfc381f7b958ae9d75e245297"} Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.753812 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.819917 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.836124 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.847407 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.873220 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k4wf\" (UniqueName: \"kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf\") pod \"55140aa6-2437-463c-be2e-0fa6735ee321\" (UID: \"55140aa6-2437-463c-be2e-0fa6735ee321\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.907144 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf" (OuterVolumeSpecName: "kube-api-access-5k4wf") pod "55140aa6-2437-463c-be2e-0fa6735ee321" (UID: "55140aa6-2437-463c-be2e-0fa6735ee321"). InnerVolumeSpecName "kube-api-access-5k4wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975264 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975324 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxzzt\" (UniqueName: \"kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975367 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975476 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975521 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59ls\" (UniqueName: \"kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975593 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975633 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975697 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975728 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975754 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975794 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: 
I0216 10:08:42.975824 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975862 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data\") pod \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\" (UID: \"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.975884 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd\") pod \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\" (UID: \"09e4bc37-1b9c-447e-93e3-b1278ca4d959\") " Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.977082 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.977920 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs" (OuterVolumeSpecName: "logs") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.979841 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.979862 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.979873 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k4wf\" (UniqueName: \"kubernetes.io/projected/55140aa6-2437-463c-be2e-0fa6735ee321-kube-api-access-5k4wf\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:42 crc kubenswrapper[4814]: I0216 10:08:42.980573 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.011757 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts" (OuterVolumeSpecName: "scripts") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.012175 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.012688 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls" (OuterVolumeSpecName: "kube-api-access-d59ls") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "kube-api-access-d59ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.037041 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt" (OuterVolumeSpecName: "kube-api-access-xxzzt") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "kube-api-access-xxzzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.041796 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data" (OuterVolumeSpecName: "config-data") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.071860 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts" (OuterVolumeSpecName: "scripts") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153145 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxzzt\" (UniqueName: \"kubernetes.io/projected/09e4bc37-1b9c-447e-93e3-b1278ca4d959-kube-api-access-xxzzt\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153782 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153812 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59ls\" (UniqueName: \"kubernetes.io/projected/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-kube-api-access-d59ls\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153829 4814 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153881 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153891 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.153910 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09e4bc37-1b9c-447e-93e3-b1278ca4d959-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.188117 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.241258 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.278641 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.278684 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.287799 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data" (OuterVolumeSpecName: "config-data") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.295122 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" (UID: "1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.305923 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09e4bc37-1b9c-447e-93e3-b1278ca4d959" (UID: "09e4bc37-1b9c-447e-93e3-b1278ca4d959"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.331114 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-bf98696f9-fcvdv"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.368208 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerStarted","Data":"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.380761 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.380803 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e4bc37-1b9c-447e-93e3-b1278ca4d959-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.380818 4814 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.390095 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f95b74b5b-mpwlg" event={"ID":"1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40","Type":"ContainerDied","Data":"ab213a8ad9dce11553ce6f3920c26820406ad0db5278067f092713707c97a6e3"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.390149 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f95b74b5b-mpwlg" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.390164 4814 scope.go:117] "RemoveContainer" containerID="87871cb885e9b3909b68ea04065f4d9407f5cf1aba6d478b7386c5f4876768fd" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.400024 4814 generic.go:334] "Generic (PLEG): container finished" podID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerID="3ef6e6a0f7074704b07ac4baa7e207ec2e0ef30092eb8ba9a5061ebe2406eada" exitCode=137 Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.400084 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerDied","Data":"3ef6e6a0f7074704b07ac4baa7e207ec2e0ef30092eb8ba9a5061ebe2406eada"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.408321 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8a5610e4-be60-4c16-9911-e06986025235","Type":"ContainerStarted","Data":"42743084fa83187bac5f7c91a5bb6542b363ee40496f5d0c220627fb2d264ceb"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.414482 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7zqmw" podStartSLOduration=2.885157847 podStartE2EDuration="14.414458075s" podCreationTimestamp="2026-02-16 10:08:29 +0000 UTC" firstStartedPulling="2026-02-16 10:08:31.101436249 +0000 UTC m=+1368.794592429" lastFinishedPulling="2026-02-16 10:08:42.630736477 +0000 UTC m=+1380.323892657" observedRunningTime="2026-02-16 10:08:43.392839653 +0000 UTC m=+1381.085995853" watchObservedRunningTime="2026-02-16 10:08:43.414458075 +0000 UTC m=+1381.107614255" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.425151 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"09e4bc37-1b9c-447e-93e3-b1278ca4d959","Type":"ContainerDied","Data":"6f95d7f5f6264c9db8212de704e3f89191a6574b4b75ce731dd0266cc4db6599"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.425369 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.431819 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"55140aa6-2437-463c-be2e-0fa6735ee321","Type":"ContainerDied","Data":"9651604a7cebb86e78d3be91b02531db2fe24c5754e4b8e0e3f03d647cc5197b"} Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.431946 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.465053 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.476061 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f95b74b5b-mpwlg"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.490130 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.015664213 podStartE2EDuration="19.490110147s" podCreationTimestamp="2026-02-16 10:08:24 +0000 UTC" firstStartedPulling="2026-02-16 10:08:26.121401838 +0000 UTC m=+1363.814558018" lastFinishedPulling="2026-02-16 10:08:42.595847772 +0000 UTC m=+1380.289003952" observedRunningTime="2026-02-16 10:08:43.488003389 +0000 UTC m=+1381.181159569" watchObservedRunningTime="2026-02-16 10:08:43.490110147 +0000 UTC m=+1381.183266327" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.567370 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.603096 4814 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.633628 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.653464 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.669052 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.669894 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-notification-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.669923 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-notification-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.669945 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon-log" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.669955 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon-log" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.669972 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="sg-core" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.669981 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="sg-core" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.669994 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55140aa6-2437-463c-be2e-0fa6735ee321" containerName="kube-state-metrics" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670001 4814 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="55140aa6-2437-463c-be2e-0fa6735ee321" containerName="kube-state-metrics" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.670027 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-central-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670035 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-central-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.670050 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="proxy-httpd" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670059 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="proxy-httpd" Feb 16 10:08:43 crc kubenswrapper[4814]: E0216 10:08:43.670081 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670089 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670301 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon-log" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670316 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="sg-core" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670328 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="55140aa6-2437-463c-be2e-0fa6735ee321" containerName="kube-state-metrics" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670342 4814 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-central-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670355 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="ceilometer-notification-agent" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670371 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" containerName="horizon" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.670380 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" containerName="proxy-httpd" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.672848 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.676203 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-lqrb5" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.683587 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.684089 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.686327 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.686493 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.689239 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.691638 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.699095 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.703437 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.721310 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.734957 4814 scope.go:117] "RemoveContainer" containerID="a3b7fbb9342bb2d0a8a281033cd00be9db237d5e9833458a68dc92e72f9e66ca" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.792017 4814 scope.go:117] "RemoveContainer" containerID="5b30f8005760cb00dcad7c3c0ce513166b8d193c803c08d9e502813e4a2c0ef2" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.814893 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.814944 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.814997 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815077 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815113 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815173 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815368 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfg9x\" (UniqueName: \"kubernetes.io/projected/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-api-access-nfg9x\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815398 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815423 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815440 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815462 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx8c\" (UniqueName: \"kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.815497 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.818737 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.855270 4814 scope.go:117] "RemoveContainer" containerID="b1c92bc3b8fdede80860622f35449e9319e34a32dd203031bf7605b4be07f8ed" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936039 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x62sg\" (UniqueName: \"kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936089 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936135 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936296 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936359 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: 
\"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936426 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.936450 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs\") pod \"7eb35cac-f0f0-45f9-8f80-63832e254210\" (UID: \"7eb35cac-f0f0-45f9-8f80-63832e254210\") " Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937176 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfg9x\" (UniqueName: \"kubernetes.io/projected/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-api-access-nfg9x\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937213 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937250 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937276 4814 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937303 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx8c\" (UniqueName: \"kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937365 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937420 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937447 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937624 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" 
(UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937685 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937761 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.937792 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.938314 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.939099 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs" (OuterVolumeSpecName: "logs") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.947331 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.952948 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.957373 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.962005 4814 scope.go:117] "RemoveContainer" containerID="fe9f0292de218db8b924e2937d8a4bec9e47fb1649458f1348d2879e3452fcfa" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.971565 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.971996 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts" (OuterVolumeSpecName: "scripts") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: 
"7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.975411 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.975848 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.976187 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfg9x\" (UniqueName: \"kubernetes.io/projected/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-api-access-nfg9x\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.979290 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.985589 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:43 crc kubenswrapper[4814]: I0216 10:08:43.986410 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a434fb2d-63b3-42cb-b686-b56870891b2c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a434fb2d-63b3-42cb-b686-b56870891b2c\") " pod="openstack/kube-state-metrics-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.002384 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clx8c\" (UniqueName: \"kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.008589 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg" (OuterVolumeSpecName: "kube-api-access-x62sg") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "kube-api-access-x62sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.018200 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.020872 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " pod="openstack/ceilometer-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.030614 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041158 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041219 4814 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb35cac-f0f0-45f9-8f80-63832e254210-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041234 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eb35cac-f0f0-45f9-8f80-63832e254210-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041247 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x62sg\" (UniqueName: \"kubernetes.io/projected/7eb35cac-f0f0-45f9-8f80-63832e254210-kube-api-access-x62sg\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041262 4814 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.041276 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.046954 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.067286 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.082519 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data" (OuterVolumeSpecName: "config-data") pod "7eb35cac-f0f0-45f9-8f80-63832e254210" (UID: "7eb35cac-f0f0-45f9-8f80-63832e254210"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.143094 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb35cac-f0f0-45f9-8f80-63832e254210-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.288988 4814 scope.go:117] "RemoveContainer" containerID="805ed1d0288a989e457e87e6544f2d93b475252db5bcc019266755f409c4cfbc" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.346624 4814 scope.go:117] "RemoveContainer" containerID="fd8e5ec3b18bccf213bc27a1e20c585795b6685cfc381f7b958ae9d75e245297" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.483116 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7eb35cac-f0f0-45f9-8f80-63832e254210","Type":"ContainerDied","Data":"b0b08f0da6884f381032a2a68e71b6bd3d3c38f863f498c7469e88a8c2101f88"} Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.483590 4814 scope.go:117] "RemoveContainer" containerID="3ef6e6a0f7074704b07ac4baa7e207ec2e0ef30092eb8ba9a5061ebe2406eada" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.483855 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.524107 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bf98696f9-fcvdv" event={"ID":"2818c738-cd93-486f-8b95-3e0c60ec8b59","Type":"ContainerStarted","Data":"7020a6dafb3ae7cf6ea032a91963b5f1a8217709fc3cdc583cf6d995f7d1f8c1"} Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.524455 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bf98696f9-fcvdv" event={"ID":"2818c738-cd93-486f-8b95-3e0c60ec8b59","Type":"ContainerStarted","Data":"77d521aa8afeec3fc552751315ff527284ac0b34cefe4ce966acbb0a0694a03d"} Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.538119 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05"} Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.587347 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.600960 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.657610 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:44 crc kubenswrapper[4814]: E0216 10:08:44.662215 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api-log" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.662241 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api-log" Feb 16 10:08:44 crc kubenswrapper[4814]: E0216 10:08:44.662333 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" 
containerName="cinder-api" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.662345 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.662779 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api-log" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.662803 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" containerName="cinder-api" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.668572 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.674889 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.675417 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.689553 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.695728 4814 scope.go:117] "RemoveContainer" containerID="ca8c67105dd1aa268f6da838fca8ba9e0800a0c12140e73f9091f74e6cb18d05" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.714925 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.733252 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:44 crc kubenswrapper[4814]: W0216 10:08:44.739507 4814 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1077fff9_a5cf_4c63_a56e_5ac9f1705d6e.slice/crio-2bd740a36fb4fdf2493d060d09bb3f2019a821d361f918b49d061dc408a12002 WatchSource:0}: Error finding container 2bd740a36fb4fdf2493d060d09bb3f2019a821d361f918b49d061dc408a12002: Status 404 returned error can't find the container with id 2bd740a36fb4fdf2493d060d09bb3f2019a821d361f918b49d061dc408a12002 Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.796994 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797347 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c2e92d0-a064-4611-9539-5dd4a4490eee-logs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797570 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797642 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797771 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c2e92d0-a064-4611-9539-5dd4a4490eee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797818 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797844 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84nkh\" (UniqueName: \"kubernetes.io/projected/5c2e92d0-a064-4611-9539-5dd4a4490eee-kube-api-access-84nkh\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.797864 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-scripts\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.799966 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 10:08:44 crc 
kubenswrapper[4814]: I0216 10:08:44.901746 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c2e92d0-a064-4611-9539-5dd4a4490eee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902165 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902219 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84nkh\" (UniqueName: \"kubernetes.io/projected/5c2e92d0-a064-4611-9539-5dd4a4490eee-kube-api-access-84nkh\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902249 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-scripts\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902451 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902521 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902584 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c2e92d0-a064-4611-9539-5dd4a4490eee-logs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902610 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.902642 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.901963 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5c2e92d0-a064-4611-9539-5dd4a4490eee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.909222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-scripts\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.909892 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c2e92d0-a064-4611-9539-5dd4a4490eee-logs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.909938 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.910742 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.911579 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.912380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-config-data-custom\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.912863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c2e92d0-a064-4611-9539-5dd4a4490eee-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " 
pod="openstack/cinder-api-0" Feb 16 10:08:44 crc kubenswrapper[4814]: I0216 10:08:44.928380 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84nkh\" (UniqueName: \"kubernetes.io/projected/5c2e92d0-a064-4611-9539-5dd4a4490eee-kube-api-access-84nkh\") pod \"cinder-api-0\" (UID: \"5c2e92d0-a064-4611-9539-5dd4a4490eee\") " pod="openstack/cinder-api-0" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.023795 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.049748 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e4bc37-1b9c-447e-93e3-b1278ca4d959" path="/var/lib/kubelet/pods/09e4bc37-1b9c-447e-93e3-b1278ca4d959/volumes" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.051016 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40" path="/var/lib/kubelet/pods/1ff1c1c3-2b57-4e33-a1ed-ca7ac7a63f40/volumes" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.051614 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55140aa6-2437-463c-be2e-0fa6735ee321" path="/var/lib/kubelet/pods/55140aa6-2437-463c-be2e-0fa6735ee321/volumes" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.056358 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb35cac-f0f0-45f9-8f80-63832e254210" path="/var/lib/kubelet/pods/7eb35cac-f0f0-45f9-8f80-63832e254210/volumes" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.504127 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.505281 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.507700 4814 scope.go:117] 
"RemoveContainer" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.577150 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-bf98696f9-fcvdv" event={"ID":"2818c738-cd93-486f-8b95-3e0c60ec8b59","Type":"ContainerStarted","Data":"2f70c1700a52f11a46be496b5a3c544316c2bb7546c311cfcad829c818983652"} Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.577294 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.577352 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.586226 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerStarted","Data":"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e"} Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.586283 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerStarted","Data":"2bd740a36fb4fdf2493d060d09bb3f2019a821d361f918b49d061dc408a12002"} Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.588995 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a434fb2d-63b3-42cb-b686-b56870891b2c","Type":"ContainerStarted","Data":"042301d0f28166382a273d5656930f2178d127857c0b857af7847b4a45059478"} Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.591036 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a434fb2d-63b3-42cb-b686-b56870891b2c","Type":"ContainerStarted","Data":"7ccc2e6845697f75b621ac94160fc22d492b70e4582b2b55c71109c5d9cd392f"} Feb 16 10:08:45 crc 
kubenswrapper[4814]: I0216 10:08:45.656468 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.205727889 podStartE2EDuration="2.656435739s" podCreationTimestamp="2026-02-16 10:08:43 +0000 UTC" firstStartedPulling="2026-02-16 10:08:44.812269667 +0000 UTC m=+1382.505425847" lastFinishedPulling="2026-02-16 10:08:45.262977517 +0000 UTC m=+1382.956133697" observedRunningTime="2026-02-16 10:08:45.650022264 +0000 UTC m=+1383.343178444" watchObservedRunningTime="2026-02-16 10:08:45.656435739 +0000 UTC m=+1383.349591919" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.661878 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-bf98696f9-fcvdv" podStartSLOduration=12.661866838 podStartE2EDuration="12.661866838s" podCreationTimestamp="2026-02-16 10:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:45.619621431 +0000 UTC m=+1383.312777611" watchObservedRunningTime="2026-02-16 10:08:45.661866838 +0000 UTC m=+1383.355023018" Feb 16 10:08:45 crc kubenswrapper[4814]: I0216 10:08:45.722251 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.476158 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-655ddb8b77-xt84d" Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.687228 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerStarted","Data":"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"} Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.721336 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:46 crc 
kubenswrapper[4814]: I0216 10:08:46.721701 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644c587556-hkrfd" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-api" containerID="cri-o://de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123" gracePeriod=30 Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.721880 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-644c587556-hkrfd" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-httpd" containerID="cri-o://9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185" gracePeriod=30 Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.742478 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerStarted","Data":"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e"} Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.780770 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c2e92d0-a064-4611-9539-5dd4a4490eee","Type":"ContainerStarted","Data":"7becc008c1d03ef0e8998b5154550f9a5f28f330c5a228e49d2e5ed33282a21c"} Feb 16 10:08:46 crc kubenswrapper[4814]: I0216 10:08:46.781265 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 10:08:47 crc kubenswrapper[4814]: I0216 10:08:47.685782 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:47 crc kubenswrapper[4814]: I0216 10:08:47.873960 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c2e92d0-a064-4611-9539-5dd4a4490eee","Type":"ContainerStarted","Data":"fe5a0aa18d0f9a31eff67f5b604d62cb67915d771fcf42d5e0a27825ed238b78"} Feb 16 10:08:47 crc kubenswrapper[4814]: I0216 10:08:47.953680 4814 
generic.go:334] "Generic (PLEG): container finished" podID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerID="9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185" exitCode=0 Feb 16 10:08:47 crc kubenswrapper[4814]: I0216 10:08:47.953987 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerDied","Data":"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185"} Feb 16 10:08:48 crc kubenswrapper[4814]: I0216 10:08:48.002676 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerStarted","Data":"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a"} Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.015729 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerStarted","Data":"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c"} Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.016595 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.021772 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5c2e92d0-a064-4611-9539-5dd4a4490eee","Type":"ContainerStarted","Data":"fb26f4dcc05d4642c2c102cc0f6e5b037a392562528805c8f4fc17c581ba9f28"} Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.021840 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.055488 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.191432028 podStartE2EDuration="6.055461423s" podCreationTimestamp="2026-02-16 10:08:43 +0000 UTC" 
firstStartedPulling="2026-02-16 10:08:44.768730264 +0000 UTC m=+1382.461886444" lastFinishedPulling="2026-02-16 10:08:48.632759659 +0000 UTC m=+1386.325915839" observedRunningTime="2026-02-16 10:08:49.048351219 +0000 UTC m=+1386.741507399" watchObservedRunningTime="2026-02-16 10:08:49.055461423 +0000 UTC m=+1386.748617603" Feb 16 10:08:49 crc kubenswrapper[4814]: I0216 10:08:49.132801 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.13277545 podStartE2EDuration="5.13277545s" podCreationTimestamp="2026-02-16 10:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:49.101162105 +0000 UTC m=+1386.794318295" watchObservedRunningTime="2026-02-16 10:08:49.13277545 +0000 UTC m=+1386.825931620" Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 10:08:50.060272 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05"} Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 10:08:50.060885 4814 scope.go:117] "RemoveContainer" containerID="c8fe78619ae6dc22f80270563a7f862d207cdf27392b7272fd1b1de5bd079eec" Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 10:08:50.061558 4814 scope.go:117] "RemoveContainer" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05" Feb 16 10:08:50 crc kubenswrapper[4814]: E0216 10:08:50.061885 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 
10:08:50.060217 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05" exitCode=0 Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 10:08:50.189933 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:50 crc kubenswrapper[4814]: I0216 10:08:50.190021 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.089813 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.090622 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-central-agent" containerID="cri-o://17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e" gracePeriod=30 Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.090934 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="proxy-httpd" containerID="cri-o://2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c" gracePeriod=30 Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.091042 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-notification-agent" containerID="cri-o://bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e" gracePeriod=30 Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.091096 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="sg-core" containerID="cri-o://665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a" gracePeriod=30 Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.247672 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7zqmw" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="registry-server" probeResult="failure" output=< Feb 16 10:08:51 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 10:08:51 crc kubenswrapper[4814]: > Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.676792 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.679606 4814 scope.go:117] "RemoveContainer" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05" Feb 16 10:08:51 crc kubenswrapper[4814]: E0216 10:08:51.680394 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:08:51 crc kubenswrapper[4814]: I0216 10:08:51.976153 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103287 4814 generic.go:334] "Generic (PLEG): container finished" podID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerID="2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c" exitCode=0 Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103330 4814 generic.go:334] "Generic (PLEG): container finished" podID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerID="665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a" exitCode=2 Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103339 4814 generic.go:334] "Generic (PLEG): container finished" podID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerID="bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e" exitCode=0 Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103378 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerDied","Data":"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c"} Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103449 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerDied","Data":"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a"} Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.103462 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerDied","Data":"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e"} Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.107324 4814 generic.go:334] "Generic (PLEG): container finished" podID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerID="de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123" exitCode=0 Feb 16 10:08:52 crc 
kubenswrapper[4814]: I0216 10:08:52.107372 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-644c587556-hkrfd" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.107392 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerDied","Data":"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123"} Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.107431 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-644c587556-hkrfd" event={"ID":"4fc5e898-ae9e-40d2-9e50-9c2acc67b824","Type":"ContainerDied","Data":"554524c1434208bdaa3808c0837972728b664a5f145a90ccf49f1b60c60608f8"} Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.107451 4814 scope.go:117] "RemoveContainer" containerID="9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.122504 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmx5s\" (UniqueName: \"kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s\") pod \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.122696 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle\") pod \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.122745 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config\") pod \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\" (UID: 
\"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.122781 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs\") pod \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.123088 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config\") pod \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\" (UID: \"4fc5e898-ae9e-40d2-9e50-9c2acc67b824\") " Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.132456 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4fc5e898-ae9e-40d2-9e50-9c2acc67b824" (UID: "4fc5e898-ae9e-40d2-9e50-9c2acc67b824"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.132482 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s" (OuterVolumeSpecName: "kube-api-access-kmx5s") pod "4fc5e898-ae9e-40d2-9e50-9c2acc67b824" (UID: "4fc5e898-ae9e-40d2-9e50-9c2acc67b824"). InnerVolumeSpecName "kube-api-access-kmx5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.142609 4814 scope.go:117] "RemoveContainer" containerID="de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.173756 4814 scope.go:117] "RemoveContainer" containerID="9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185" Feb 16 10:08:52 crc kubenswrapper[4814]: E0216 10:08:52.174606 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185\": container with ID starting with 9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185 not found: ID does not exist" containerID="9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.174667 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185"} err="failed to get container status \"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185\": rpc error: code = NotFound desc = could not find container \"9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185\": container with ID starting with 9ed8f0b8180958d144664e1f846717935e7c742d247057bbf9248cd6c3eaa185 not found: ID does not exist" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.174731 4814 scope.go:117] "RemoveContainer" containerID="de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123" Feb 16 10:08:52 crc kubenswrapper[4814]: E0216 10:08:52.175281 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123\": container with ID starting with 
de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123 not found: ID does not exist" containerID="de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.175314 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123"} err="failed to get container status \"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123\": rpc error: code = NotFound desc = could not find container \"de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123\": container with ID starting with de7c83e0e93a829a524fe0dfa81656935c8992e1995020f606bf4c49388c2123 not found: ID does not exist" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.196977 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fc5e898-ae9e-40d2-9e50-9c2acc67b824" (UID: "4fc5e898-ae9e-40d2-9e50-9c2acc67b824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.222614 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config" (OuterVolumeSpecName: "config") pod "4fc5e898-ae9e-40d2-9e50-9c2acc67b824" (UID: "4fc5e898-ae9e-40d2-9e50-9c2acc67b824"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.225148 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4fc5e898-ae9e-40d2-9e50-9c2acc67b824" (UID: "4fc5e898-ae9e-40d2-9e50-9c2acc67b824"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.230848 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.230871 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmx5s\" (UniqueName: \"kubernetes.io/projected/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-kube-api-access-kmx5s\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.230885 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.230894 4814 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.230904 4814 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fc5e898-ae9e-40d2-9e50-9c2acc67b824-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.487669 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.499662 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-644c587556-hkrfd"] Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.586915 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.587277 4814 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-log" containerID="cri-o://112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08" gracePeriod=30 Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.587362 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-httpd" containerID="cri-o://f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6" gracePeriod=30 Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.677455 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:08:52 crc kubenswrapper[4814]: I0216 10:08:52.678928 4814 scope.go:117] "RemoveContainer" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05" Feb 16 10:08:52 crc kubenswrapper[4814]: E0216 10:08:52.679487 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:08:53 crc kubenswrapper[4814]: I0216 10:08:53.007490 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" path="/var/lib/kubelet/pods/4fc5e898-ae9e-40d2-9e50-9c2acc67b824/volumes" Feb 16 10:08:53 crc kubenswrapper[4814]: I0216 10:08:53.124231 4814 generic.go:334] "Generic (PLEG): container finished" podID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerID="112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08" exitCode=143 Feb 16 10:08:53 crc kubenswrapper[4814]: I0216 10:08:53.124294 4814 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerDied","Data":"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08"} Feb 16 10:08:53 crc kubenswrapper[4814]: I0216 10:08:53.898354 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:53 crc kubenswrapper[4814]: I0216 10:08:53.899438 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-bf98696f9-fcvdv" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.064115 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.106161 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.168800 4814 generic.go:334] "Generic (PLEG): container finished" podID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerID="f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6" exitCode=0 Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.168926 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.168997 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerDied","Data":"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6"} Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.169050 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23","Type":"ContainerDied","Data":"9812a6c36951790f04b24ce685fb51f0ff264aca3b233daffaa51329ac5a37e7"} Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.169077 4814 scope.go:117] "RemoveContainer" containerID="f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193069 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193339 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193396 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193465 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193570 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193616 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193692 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.193782 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bcqn\" (UniqueName: \"kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn\") pod \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\" (UID: \"e6af939f-d1dd-44b1-b0c0-a52f27bf6f23\") " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.195481 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs" (OuterVolumeSpecName: "logs") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.199363 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.225244 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn" (OuterVolumeSpecName: "kube-api-access-8bcqn") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "kube-api-access-8bcqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.226177 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.232573 4814 scope.go:117] "RemoveContainer" containerID="112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.237288 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts" (OuterVolumeSpecName: "scripts") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.277918 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.293626 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data" (OuterVolumeSpecName: "config-data") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297502 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297611 4814 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297624 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297632 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc 
kubenswrapper[4814]: I0216 10:08:54.297642 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297652 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bcqn\" (UniqueName: \"kubernetes.io/projected/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-kube-api-access-8bcqn\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.297677 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.330725 4814 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.339913 4814 scope.go:117] "RemoveContainer" containerID="f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6" Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.344685 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6\": container with ID starting with f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6 not found: ID does not exist" containerID="f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.344728 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6"} err="failed to get container status \"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6\": rpc error: code = 
NotFound desc = could not find container \"f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6\": container with ID starting with f7e8e36657a805a89d6adf79fc66bfdfe34034fc682216c0af0375ab2c3410b6 not found: ID does not exist" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.344761 4814 scope.go:117] "RemoveContainer" containerID="112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08" Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.346854 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08\": container with ID starting with 112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08 not found: ID does not exist" containerID="112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.346881 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08"} err="failed to get container status \"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08\": rpc error: code = NotFound desc = could not find container \"112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08\": container with ID starting with 112572e8499f957be6a14083ca95ee4312ad15861b7371985198654b68d20b08 not found: ID does not exist" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.364505 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" (UID: "e6af939f-d1dd-44b1-b0c0-a52f27bf6f23"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.399623 4814 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.399672 4814 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.587815 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.608188 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.636177 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.636807 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-api" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.636832 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-api" Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.636848 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.636856 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.636879 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-log" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.636886 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-log" Feb 16 10:08:54 crc kubenswrapper[4814]: E0216 10:08:54.636907 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.636913 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.638140 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.638168 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-httpd" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.638206 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fc5e898-ae9e-40d2-9e50-9c2acc67b824" containerName="neutron-api" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.638223 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" containerName="glance-log" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.640851 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.645174 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.645652 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.654241 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825073 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825143 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-logs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825292 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-config-data\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825329 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbc9\" (UniqueName: 
\"kubernetes.io/projected/77a0d3ee-2bcb-4733-89a8-b4525fc20768-kube-api-access-fbbc9\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825360 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-scripts\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825420 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825451 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.825514 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.895017 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928435 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928501 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-logs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928671 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-config-data\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928727 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbc9\" (UniqueName: \"kubernetes.io/projected/77a0d3ee-2bcb-4733-89a8-b4525fc20768-kube-api-access-fbbc9\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928768 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-scripts\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc 
kubenswrapper[4814]: I0216 10:08:54.928861 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928893 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.928971 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.929315 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.930165 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a0d3ee-2bcb-4733-89a8-b4525fc20768-logs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.930822 4814 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.936993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-config-data\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.939199 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.946192 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.954193 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77a0d3ee-2bcb-4733-89a8-b4525fc20768-scripts\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:54 crc kubenswrapper[4814]: I0216 10:08:54.958479 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbc9\" (UniqueName: 
\"kubernetes.io/projected/77a0d3ee-2bcb-4733-89a8-b4525fc20768-kube-api-access-fbbc9\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.026826 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77a0d3ee-2bcb-4733-89a8-b4525fc20768\") " pod="openstack/glance-default-external-api-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041244 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041313 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041432 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041484 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: 
\"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041550 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clx8c\" (UniqueName: \"kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041598 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041659 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.041760 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd\") pod \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\" (UID: \"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e\") " Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.045070 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.045094 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.048298 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6af939f-d1dd-44b1-b0c0-a52f27bf6f23" path="/var/lib/kubelet/pods/e6af939f-d1dd-44b1-b0c0-a52f27bf6f23/volumes" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.057637 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.057679 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.067271 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c" (OuterVolumeSpecName: "kube-api-access-clx8c") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "kube-api-access-clx8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.081047 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts" (OuterVolumeSpecName: "scripts") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.109012 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.117669 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.159465 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.159511 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.159530 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.159559 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clx8c\" (UniqueName: \"kubernetes.io/projected/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-kube-api-access-clx8c\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.178807 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.186101 4814 generic.go:334] "Generic (PLEG): container finished" podID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerID="17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e" exitCode=0 Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.186193 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerDied","Data":"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e"} Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.186233 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1077fff9-a5cf-4c63-a56e-5ac9f1705d6e","Type":"ContainerDied","Data":"2bd740a36fb4fdf2493d060d09bb3f2019a821d361f918b49d061dc408a12002"} Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.186255 4814 scope.go:117] "RemoveContainer" containerID="2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.186447 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.228489 4814 scope.go:117] "RemoveContainer" containerID="665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.232795 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data" (OuterVolumeSpecName: "config-data") pod "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" (UID: "1077fff9-a5cf-4c63-a56e-5ac9f1705d6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.262351 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.262897 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.263594 4814 scope.go:117] "RemoveContainer" containerID="bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.266482 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.308924 4814 scope.go:117] "RemoveContainer" containerID="17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.333250 4814 scope.go:117] "RemoveContainer" containerID="2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.333873 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c\": container with ID starting with 2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c not found: ID does not exist" containerID="2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.333932 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c"} err="failed to get container status \"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c\": rpc error: code = NotFound desc = could not find container \"2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c\": container with ID starting with 2835ef4b4964c1922753433e1cf562a698bd66824f89d3247c55d9cdd613807c not found: ID does not exist" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.333970 4814 scope.go:117] "RemoveContainer" containerID="665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.334399 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a\": container with ID starting with 665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a not found: ID does not exist" containerID="665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.334448 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a"} err="failed to get container status \"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a\": rpc error: code = NotFound desc = could not find container \"665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a\": container with ID starting with 665a605120b8b18e5345d36a4618a7d4faeb16db2617089d9b17808a15004b8a not found: ID does not exist" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.334480 4814 scope.go:117] "RemoveContainer" containerID="bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.334766 4814 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e\": container with ID starting with bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e not found: ID does not exist" containerID="bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.334803 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e"} err="failed to get container status \"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e\": rpc error: code = NotFound desc = could not find container \"bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e\": container with ID starting with bedcfa7a33be63d171a05c29823baa21fec60c87709b9fc937d7d949d09c697e not found: ID does not exist" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.334823 4814 scope.go:117] "RemoveContainer" containerID="17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.335207 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e\": container with ID starting with 17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e not found: ID does not exist" containerID="17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.335234 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e"} err="failed to get container status \"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e\": rpc error: code = NotFound desc = could not find container 
\"17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e\": container with ID starting with 17007f059a98ca46a6a2e90547b1f3115300167ef5225400f20934bc7d69063e not found: ID does not exist" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.504411 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.571548 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.606961 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.613196 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.630828 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.631651 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-notification-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.631675 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-notification-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.631703 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-central-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.631712 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-central-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.631735 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="sg-core" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.631746 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="sg-core" Feb 16 10:08:55 crc kubenswrapper[4814]: E0216 10:08:55.631754 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="proxy-httpd" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.631762 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="proxy-httpd" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.631993 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="proxy-httpd" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.632011 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-central-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.632025 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="ceilometer-notification-agent" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.632041 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" containerName="sg-core" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.634375 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.648248 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.648580 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.648784 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.684622 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.694995 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tbb\" (UniqueName: \"kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695082 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695174 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695225 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695268 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695389 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.695440 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.801216 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.801833 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.801882 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.801934 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.801966 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.802003 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.802053 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.802187 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tbb\" (UniqueName: \"kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.803267 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.808983 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.819339 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.819462 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc 
kubenswrapper[4814]: I0216 10:08:55.820550 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.828882 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.830665 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.832365 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tbb\" (UniqueName: \"kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb\") pod \"ceilometer-0\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") " pod="openstack/ceilometer-0" Feb 16 10:08:55 crc kubenswrapper[4814]: I0216 10:08:55.993525 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:08:56 crc kubenswrapper[4814]: I0216 10:08:56.066821 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 10:08:56 crc kubenswrapper[4814]: I0216 10:08:56.205170 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"77a0d3ee-2bcb-4733-89a8-b4525fc20768","Type":"ContainerStarted","Data":"e7f7616184e90a5cdecf7b1662512fc6d5158f242a21ee38d186f762c5120623"} Feb 16 10:08:56 crc kubenswrapper[4814]: I0216 10:08:56.207510 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:56 crc kubenswrapper[4814]: I0216 10:08:56.256051 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 16 10:08:56 crc kubenswrapper[4814]: I0216 10:08:56.597504 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.024660 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1077fff9-a5cf-4c63-a56e-5ac9f1705d6e" path="/var/lib/kubelet/pods/1077fff9-a5cf-4c63-a56e-5ac9f1705d6e/volumes" Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.233673 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"77a0d3ee-2bcb-4733-89a8-b4525fc20768","Type":"ContainerStarted","Data":"2853f3e4caacc8820f3c03c4fb6d9f6ab1b5e83ad41ad61b52612684e52446de"} Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.238196 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerStarted","Data":"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6"} Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.238250 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerStarted","Data":"e4f11c2c0f9465a662efcaadc323ba84be4ffdb66c97abfc9471c3ef8dc40612"}
Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.628035 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.643980 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-log" containerID="cri-o://5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518" gracePeriod=30
Feb 16 10:08:57 crc kubenswrapper[4814]: I0216 10:08:57.644958 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-httpd" containerID="cri-o://53dbded13800fb5aa93db6abfae70bc8e14a2f2f83ff1b4104bc25f7198d3a54" gracePeriod=30
Feb 16 10:08:58 crc kubenswrapper[4814]: E0216 10:08:58.106662 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53476134_c469_4492_8ac7_3f2ed6a87247.slice/crio-5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53476134_c469_4492_8ac7_3f2ed6a87247.slice/crio-conmon-5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.252945 4814 generic.go:334] "Generic (PLEG): container finished" podID="53476134-c469-4492-8ac7-3f2ed6a87247" containerID="5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518" exitCode=143
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.253025 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerDied","Data":"5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518"}
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.257346 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"77a0d3ee-2bcb-4733-89a8-b4525fc20768","Type":"ContainerStarted","Data":"8a9538c6223f02ac1c6fdd43d38ca3eb756a7469c921a0aa150c8188e83a55fc"}
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.262490 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerStarted","Data":"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126"}
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.262552 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerStarted","Data":"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c"}
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.284607 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.284520331 podStartE2EDuration="4.284520331s" podCreationTimestamp="2026-02-16 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:08:58.279405041 +0000 UTC m=+1395.972561221" watchObservedRunningTime="2026-02-16 10:08:58.284520331 +0000 UTC m=+1395.977676511"
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.559894 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 16 10:08:58 crc kubenswrapper[4814]: I0216 10:08:58.706175 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.695084 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-g68vm"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.698003 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.719479 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g68vm"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.824116 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-12af-account-create-update-cg5n4"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.829507 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.839818 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.865363 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jf7\" (UniqueName: \"kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.865676 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.891253 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-l5qxl"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.894710 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.938216 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l5qxl"]
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.969174 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.969300 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tcf\" (UniqueName: \"kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.969410 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.969604 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62jf7\" (UniqueName: \"kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.970444 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.981206 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cmf\" (UniqueName: \"kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:08:59 crc kubenswrapper[4814]: I0216 10:08:59.982092 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.002813 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-12af-account-create-update-cg5n4"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.056873 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62jf7\" (UniqueName: \"kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7\") pod \"nova-api-db-create-g68vm\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") " pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.085345 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tcf\" (UniqueName: \"kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.085472 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.085642 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cmf\" (UniqueName: \"kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.085860 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.088986 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.092783 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.102923 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8cpcd"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.106519 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.124793 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cmf\" (UniqueName: \"kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf\") pod \"nova-api-12af-account-create-update-cg5n4\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") " pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.125316 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tcf\" (UniqueName: \"kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf\") pod \"nova-cell0-db-create-l5qxl\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") " pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.134001 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.138137 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8cpcd"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.179964 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ce84-account-create-update-pjk5v"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.182427 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.188826 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.192705 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.192999 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2sm8\" (UniqueName: \"kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.215629 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ce84-account-create-update-pjk5v"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.231861 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.250342 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d25f-account-create-update-p2nqz"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.252782 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.258953 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.262845 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d25f-account-create-update-p2nqz"]
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.273434 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.297659 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2sm8\" (UniqueName: \"kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.297723 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mspbt\" (UniqueName: \"kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.297910 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.297949 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.299276 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.349397 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2sm8\" (UniqueName: \"kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8\") pod \"nova-cell1-db-create-8cpcd\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") " pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.409095 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.409339 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mspbt\" (UniqueName: \"kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.409438 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlq7\" (UniqueName: \"kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.409477 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.410956 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.447246 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7zqmw"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.458366 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mspbt\" (UniqueName: \"kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt\") pod \"nova-cell0-ce84-account-create-update-pjk5v\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") " pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.492307 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.514448 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlq7\" (UniqueName: \"kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.514514 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.520128 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.540480 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.557125 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7zqmw"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.572660 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlq7\" (UniqueName: \"kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7\") pod \"nova-cell1-d25f-account-create-update-p2nqz\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") " pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.867355 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:00 crc kubenswrapper[4814]: I0216 10:09:00.965112 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-g68vm"]
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.188248 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"]
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.237812 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-12af-account-create-update-cg5n4"]
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.260095 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8cpcd"]
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.502842 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-12af-account-create-update-cg5n4" event={"ID":"ae29d14a-c8e0-4754-98da-720dd05df22f","Type":"ContainerStarted","Data":"04c11ddb1894e3510f415d92d66b0781957f3b640e6b8474fce3da2158839a22"}
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.506602 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-l5qxl"]
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.519624 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerStarted","Data":"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd"}
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.519892 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-central-agent" containerID="cri-o://671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6" gracePeriod=30
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.520098 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.520638 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="proxy-httpd" containerID="cri-o://1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd" gracePeriod=30
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.520704 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="sg-core" containerID="cri-o://ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126" gracePeriod=30
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.520773 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-notification-agent" containerID="cri-o://4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c" gracePeriod=30
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.521569 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ce84-account-create-update-pjk5v"]
Feb 16 10:09:01 crc kubenswrapper[4814]: W0216 10:09:01.521783 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a291623_9f03_4157_b461_a3ece83a7c03.slice/crio-d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb WatchSource:0}: Error finding container d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb: Status 404 returned error can't find the container with id d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.558245 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8cpcd" event={"ID":"a47bbc4d-27c9-488a-814c-4223fcdc8c2c","Type":"ContainerStarted","Data":"e12a3321ac5e7c845fc9f379e4713caec18f24dce2123e19cf4d75173eb2d3fd"}
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.570902 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g68vm" event={"ID":"59dfc847-309b-4f50-8d29-9418ba80cbd7","Type":"ContainerStarted","Data":"585f5d016453ea99cc662d05ab2b20c135274d9d14c7b88add431dba1beb4299"}
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.570958 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g68vm" event={"ID":"59dfc847-309b-4f50-8d29-9418ba80cbd7","Type":"ContainerStarted","Data":"7e45d1c6ff8feae053d6d069770da93cb924ddf1e7d566721942e8469e7783e0"}
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.586438 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.628329754 podStartE2EDuration="6.586413435s" podCreationTimestamp="2026-02-16 10:08:55 +0000 UTC" firstStartedPulling="2026-02-16 10:08:56.619322878 +0000 UTC m=+1394.312479058" lastFinishedPulling="2026-02-16 10:08:59.577406569 +0000 UTC m=+1397.270562739" observedRunningTime="2026-02-16 10:09:01.558748318 +0000 UTC m=+1399.251904508" watchObservedRunningTime="2026-02-16 10:09:01.586413435 +0000 UTC m=+1399.279569605"
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.611611 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-g68vm" podStartSLOduration=2.611585754 podStartE2EDuration="2.611585754s" podCreationTimestamp="2026-02-16 10:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:09:01.604450299 +0000 UTC m=+1399.297606479" watchObservedRunningTime="2026-02-16 10:09:01.611585754 +0000 UTC m=+1399.304741934"
Feb 16 10:09:01 crc kubenswrapper[4814]: I0216 10:09:01.657854 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d25f-account-create-update-p2nqz"]
Feb 16 10:09:01 crc kubenswrapper[4814]: W0216 10:09:01.673705 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f31668d_a857_480f_b05a_fa46298ea10e.slice/crio-23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14 WatchSource:0}: Error finding container 23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14: Status 404 returned error can't find the container with id 23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.589840 4814 generic.go:334] "Generic (PLEG): container finished" podID="a47bbc4d-27c9-488a-814c-4223fcdc8c2c" containerID="687c02b54e05e906a8df2893690d5ca2b00e63a32dfc35086b744c5e27be5e70" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.590377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8cpcd" event={"ID":"a47bbc4d-27c9-488a-814c-4223fcdc8c2c","Type":"ContainerDied","Data":"687c02b54e05e906a8df2893690d5ca2b00e63a32dfc35086b744c5e27be5e70"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.598234 4814 generic.go:334] "Generic (PLEG): container finished" podID="0f31668d-a857-480f-b05a-fa46298ea10e" containerID="7fc144d6f269de87ecf431b69b81c00f96ac5606c279b416d72cd2a2d9d279cd" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.598336 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz" event={"ID":"0f31668d-a857-480f-b05a-fa46298ea10e","Type":"ContainerDied","Data":"7fc144d6f269de87ecf431b69b81c00f96ac5606c279b416d72cd2a2d9d279cd"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.598378 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz" event={"ID":"0f31668d-a857-480f-b05a-fa46298ea10e","Type":"ContainerStarted","Data":"23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.606189 4814 generic.go:334] "Generic (PLEG): container finished" podID="59dfc847-309b-4f50-8d29-9418ba80cbd7" containerID="585f5d016453ea99cc662d05ab2b20c135274d9d14c7b88add431dba1beb4299" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.606278 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g68vm" event={"ID":"59dfc847-309b-4f50-8d29-9418ba80cbd7","Type":"ContainerDied","Data":"585f5d016453ea99cc662d05ab2b20c135274d9d14c7b88add431dba1beb4299"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.626100 4814 generic.go:334] "Generic (PLEG): container finished" podID="9a291623-9f03-4157-b461-a3ece83a7c03" containerID="49f6d9c013b882a0522784f8bf08e9ac5d9d491646e6654b34dc68fadcb5b95b" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.626260 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5qxl" event={"ID":"9a291623-9f03-4157-b461-a3ece83a7c03","Type":"ContainerDied","Data":"49f6d9c013b882a0522784f8bf08e9ac5d9d491646e6654b34dc68fadcb5b95b"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.626339 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5qxl" event={"ID":"9a291623-9f03-4157-b461-a3ece83a7c03","Type":"ContainerStarted","Data":"d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.634787 4814 generic.go:334] "Generic (PLEG): container finished" podID="53476134-c469-4492-8ac7-3f2ed6a87247" containerID="53dbded13800fb5aa93db6abfae70bc8e14a2f2f83ff1b4104bc25f7198d3a54" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.634861 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerDied","Data":"53dbded13800fb5aa93db6abfae70bc8e14a2f2f83ff1b4104bc25f7198d3a54"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.645641 4814 generic.go:334] "Generic (PLEG): container finished" podID="ae29d14a-c8e0-4754-98da-720dd05df22f" containerID="aa1947088f412c86c9e4d41e75f4670dc3ebce09e2818308c5d3362b6fa8c7fb" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.645712 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-12af-account-create-update-cg5n4" event={"ID":"ae29d14a-c8e0-4754-98da-720dd05df22f","Type":"ContainerDied","Data":"aa1947088f412c86c9e4d41e75f4670dc3ebce09e2818308c5d3362b6fa8c7fb"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.659971 4814 generic.go:334] "Generic (PLEG): container finished" podID="481f2ffd-8a55-4bb8-bbac-f0862c645d53" containerID="14c1dff884c79f91621465ed2c72ba18db7ad0965d3c2934d8d2ebf1203f7939" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.660210 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v" event={"ID":"481f2ffd-8a55-4bb8-bbac-f0862c645d53","Type":"ContainerDied","Data":"14c1dff884c79f91621465ed2c72ba18db7ad0965d3c2934d8d2ebf1203f7939"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.660251 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v" event={"ID":"481f2ffd-8a55-4bb8-bbac-f0862c645d53","Type":"ContainerStarted","Data":"e4360c929498616928de3adbe9187f86a3e25f8d6a0d253638bff06d9796c244"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.671502 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerID="1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.671568 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerID="ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126" exitCode=2
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.671577 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerID="4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c" exitCode=0
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.672218 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7zqmw" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="registry-server" containerID="cri-o://88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99" gracePeriod=2
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.672403 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerDied","Data":"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.672458 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerDied","Data":"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.672472 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerDied","Data":"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c"}
Feb 16 10:09:02 crc kubenswrapper[4814]: I0216 10:09:02.939111 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.122636 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.122706 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.122782 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.122844 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.123012 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqk6d\" (UniqueName: \"kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.123063 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.123088 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.123376 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data\") pod \"53476134-c469-4492-8ac7-3f2ed6a87247\" (UID: \"53476134-c469-4492-8ac7-3f2ed6a87247\") "
Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.123980 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID:
"53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.124591 4814 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.130062 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs" (OuterVolumeSpecName: "logs") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.133627 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d" (OuterVolumeSpecName: "kube-api-access-cqk6d") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "kube-api-access-cqk6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.133988 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.137256 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts" (OuterVolumeSpecName: "scripts") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.174859 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.226519 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqk6d\" (UniqueName: \"kubernetes.io/projected/53476134-c469-4492-8ac7-3f2ed6a87247-kube-api-access-cqk6d\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.226591 4814 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.226604 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53476134-c469-4492-8ac7-3f2ed6a87247-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.226612 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: 
I0216 10:09:03.226621 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.227665 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.233250 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.249497 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data" (OuterVolumeSpecName: "config-data") pod "53476134-c469-4492-8ac7-3f2ed6a87247" (UID: "53476134-c469-4492-8ac7-3f2ed6a87247"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.259154 4814 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.345167 4814 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.345234 4814 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.345250 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53476134-c469-4492-8ac7-3f2ed6a87247-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.446871 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnnk6\" (UniqueName: \"kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6\") pod \"7d9f6435-c405-4209-9a92-26f39daf2909\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.447071 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content\") pod \"7d9f6435-c405-4209-9a92-26f39daf2909\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.447168 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities\") pod \"7d9f6435-c405-4209-9a92-26f39daf2909\" (UID: \"7d9f6435-c405-4209-9a92-26f39daf2909\") " Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.448448 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities" (OuterVolumeSpecName: "utilities") pod "7d9f6435-c405-4209-9a92-26f39daf2909" (UID: "7d9f6435-c405-4209-9a92-26f39daf2909"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.451427 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6" (OuterVolumeSpecName: "kube-api-access-rnnk6") pod "7d9f6435-c405-4209-9a92-26f39daf2909" (UID: "7d9f6435-c405-4209-9a92-26f39daf2909"). InnerVolumeSpecName "kube-api-access-rnnk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.557159 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.557227 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnnk6\" (UniqueName: \"kubernetes.io/projected/7d9f6435-c405-4209-9a92-26f39daf2909-kube-api-access-rnnk6\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.618627 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d9f6435-c405-4209-9a92-26f39daf2909" (UID: "7d9f6435-c405-4209-9a92-26f39daf2909"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.659698 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9f6435-c405-4209-9a92-26f39daf2909-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.693950 4814 generic.go:334] "Generic (PLEG): container finished" podID="7d9f6435-c405-4209-9a92-26f39daf2909" containerID="88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99" exitCode=0 Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.694030 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerDied","Data":"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99"} Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.694106 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7zqmw" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.694144 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zqmw" event={"ID":"7d9f6435-c405-4209-9a92-26f39daf2909","Type":"ContainerDied","Data":"6c0a0c77b23b85647334332c5aab3c690c05ea754a89e8cae1dac7e515427ff5"} Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.694172 4814 scope.go:117] "RemoveContainer" containerID="88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.700039 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53476134-c469-4492-8ac7-3f2ed6a87247","Type":"ContainerDied","Data":"c7ef1637cbc168dfe8eadfe06c3237ca297bfecf1b2b6de56fa4183d63209b96"} Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.700151 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.768470 4814 scope.go:117] "RemoveContainer" containerID="29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.785760 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.817631 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.869157 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"] Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.896140 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7zqmw"] Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.903513 4814 scope.go:117] "RemoveContainer" containerID="bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.945739 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:09:03 crc kubenswrapper[4814]: E0216 10:09:03.946358 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-httpd" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946380 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-httpd" Feb 16 10:09:03 crc kubenswrapper[4814]: E0216 10:09:03.946392 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="extract-content" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946400 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="extract-content" Feb 16 10:09:03 crc kubenswrapper[4814]: E0216 10:09:03.946418 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="registry-server" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946424 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="registry-server" Feb 16 10:09:03 crc kubenswrapper[4814]: E0216 10:09:03.946447 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="extract-utilities" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946454 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="extract-utilities" Feb 16 10:09:03 crc kubenswrapper[4814]: E0216 10:09:03.946474 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-log" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946479 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-log" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946678 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-log" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946698 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" containerName="glance-httpd" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.946719 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" containerName="registry-server" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.948128 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.954886 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 10:09:03 crc kubenswrapper[4814]: I0216 10:09:03.955223 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.000148 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.006616 4814 scope.go:117] "RemoveContainer" containerID="88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99" Feb 16 10:09:04 crc kubenswrapper[4814]: E0216 10:09:04.007285 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99\": container with ID starting with 88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99 not found: ID does not exist" containerID="88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.007334 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99"} err="failed to get container status \"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99\": rpc error: code = NotFound desc = could not find container \"88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99\": container with ID starting with 88aab0ee05f2ed93ac3af8dfdf7b0b83bcfacd4731bc6ce0908dc944801afc99 not found: ID does not exist" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.007369 4814 scope.go:117] "RemoveContainer" 
containerID="29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48" Feb 16 10:09:04 crc kubenswrapper[4814]: E0216 10:09:04.008117 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48\": container with ID starting with 29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48 not found: ID does not exist" containerID="29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.008167 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48"} err="failed to get container status \"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48\": rpc error: code = NotFound desc = could not find container \"29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48\": container with ID starting with 29cbab77f0304db8cbfbf450cd8a884c19eb31189310cc815ff5e97fca70ee48 not found: ID does not exist" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.008201 4814 scope.go:117] "RemoveContainer" containerID="bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331" Feb 16 10:09:04 crc kubenswrapper[4814]: E0216 10:09:04.008698 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331\": container with ID starting with bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331 not found: ID does not exist" containerID="bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.008765 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331"} err="failed to get container status \"bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331\": rpc error: code = NotFound desc = could not find container \"bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331\": container with ID starting with bb9c120521c97784b3c008b06ee39456ae24a51cd194172c1a3cf6b5bca91331 not found: ID does not exist" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.008809 4814 scope.go:117] "RemoveContainer" containerID="53dbded13800fb5aa93db6abfae70bc8e14a2f2f83ff1b4104bc25f7198d3a54" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078691 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078768 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078811 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078837 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psklw\" (UniqueName: \"kubernetes.io/projected/e69bc859-f4a3-4e24-92be-cbe76d3faee4-kube-api-access-psklw\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078932 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.078981 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.079017 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.095436 4814 scope.go:117] 
"RemoveContainer" containerID="5a047c65ef8377204dda383c58a3ad0482bbee4e87c0c02d4428d19bb842b518" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.181498 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.181775 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.181862 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182020 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0" Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182146 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " 
pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182216 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182260 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182402 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psklw\" (UniqueName: \"kubernetes.io/projected/e69bc859-f4a3-4e24-92be-cbe76d3faee4-kube-api-access-psklw\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.182937 4814 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.184357 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.186568 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e69bc859-f4a3-4e24-92be-cbe76d3faee4-logs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.195608 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.202578 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.204165 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.212883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69bc859-f4a3-4e24-92be-cbe76d3faee4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.213098 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psklw\" (UniqueName: \"kubernetes.io/projected/e69bc859-f4a3-4e24-92be-cbe76d3faee4-kube-api-access-psklw\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.240037 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e69bc859-f4a3-4e24-92be-cbe76d3faee4\") " pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.337763 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.383762 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.509513 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2sm8\" (UniqueName: \"kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8\") pod \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.509732 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts\") pod \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\" (UID: \"a47bbc4d-27c9-488a-814c-4223fcdc8c2c\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.511072 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a47bbc4d-27c9-488a-814c-4223fcdc8c2c" (UID: "a47bbc4d-27c9-488a-814c-4223fcdc8c2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.527788 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8" (OuterVolumeSpecName: "kube-api-access-v2sm8") pod "a47bbc4d-27c9-488a-814c-4223fcdc8c2c" (UID: "a47bbc4d-27c9-488a-814c-4223fcdc8c2c"). InnerVolumeSpecName "kube-api-access-v2sm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.557318 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.570020 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.613461 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2sm8\" (UniqueName: \"kubernetes.io/projected/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-kube-api-access-v2sm8\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.613870 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a47bbc4d-27c9-488a-814c-4223fcdc8c2c-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.670663 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.714846 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts\") pod \"ae29d14a-c8e0-4754-98da-720dd05df22f\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.715149 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts\") pod \"9a291623-9f03-4157-b461-a3ece83a7c03\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.715304 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5tcf\" (UniqueName: \"kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf\") pod \"9a291623-9f03-4157-b461-a3ece83a7c03\" (UID: \"9a291623-9f03-4157-b461-a3ece83a7c03\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.715378 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4cmf\" (UniqueName: \"kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf\") pod \"ae29d14a-c8e0-4754-98da-720dd05df22f\" (UID: \"ae29d14a-c8e0-4754-98da-720dd05df22f\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.716295 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a291623-9f03-4157-b461-a3ece83a7c03" (UID: "9a291623-9f03-4157-b461-a3ece83a7c03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.716358 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae29d14a-c8e0-4754-98da-720dd05df22f" (UID: "ae29d14a-c8e0-4754-98da-720dd05df22f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.717944 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz" event={"ID":"0f31668d-a857-480f-b05a-fa46298ea10e","Type":"ContainerDied","Data":"23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14"}
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.717984 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f3cb87d41b31f4b1647649b2ff3fcebf14037ca20ee1c03448521b36645d14"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.718522 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.741710 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf" (OuterVolumeSpecName: "kube-api-access-q4cmf") pod "ae29d14a-c8e0-4754-98da-720dd05df22f" (UID: "ae29d14a-c8e0-4754-98da-720dd05df22f"). InnerVolumeSpecName "kube-api-access-q4cmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.742737 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-l5qxl" event={"ID":"9a291623-9f03-4157-b461-a3ece83a7c03","Type":"ContainerDied","Data":"d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb"}
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.742792 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5eebf3f9d4a5e6af421b38fd1f8f84a9c387cd8d9c188e4e8a7eabfbec7b3bb"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.742858 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-l5qxl"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.754630 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-12af-account-create-update-cg5n4" event={"ID":"ae29d14a-c8e0-4754-98da-720dd05df22f","Type":"ContainerDied","Data":"04c11ddb1894e3510f415d92d66b0781957f3b640e6b8474fce3da2158839a22"}
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.754685 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c11ddb1894e3510f415d92d66b0781957f3b640e6b8474fce3da2158839a22"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.754794 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-12af-account-create-update-cg5n4"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.766404 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.766464 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce84-account-create-update-pjk5v" event={"ID":"481f2ffd-8a55-4bb8-bbac-f0862c645d53","Type":"ContainerDied","Data":"e4360c929498616928de3adbe9187f86a3e25f8d6a0d253638bff06d9796c244"}
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.766589 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4360c929498616928de3adbe9187f86a3e25f8d6a0d253638bff06d9796c244"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.774277 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8cpcd" event={"ID":"a47bbc4d-27c9-488a-814c-4223fcdc8c2c","Type":"ContainerDied","Data":"e12a3321ac5e7c845fc9f379e4713caec18f24dce2123e19cf4d75173eb2d3fd"}
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.774366 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e12a3321ac5e7c845fc9f379e4713caec18f24dce2123e19cf4d75173eb2d3fd"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.774416 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8cpcd"
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.784233 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf" (OuterVolumeSpecName: "kube-api-access-h5tcf") pod "9a291623-9f03-4157-b461-a3ece83a7c03" (UID: "9a291623-9f03-4157-b461-a3ece83a7c03"). InnerVolumeSpecName "kube-api-access-h5tcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.817681 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts\") pod \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.817933 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mspbt\" (UniqueName: \"kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt\") pod \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\" (UID: \"481f2ffd-8a55-4bb8-bbac-f0862c645d53\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.818282 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts\") pod \"0f31668d-a857-480f-b05a-fa46298ea10e\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.818365 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhlq7\" (UniqueName: \"kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7\") pod \"0f31668d-a857-480f-b05a-fa46298ea10e\" (UID: \"0f31668d-a857-480f-b05a-fa46298ea10e\") "
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.819525 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a291623-9f03-4157-b461-a3ece83a7c03-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.819567 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5tcf\" (UniqueName: \"kubernetes.io/projected/9a291623-9f03-4157-b461-a3ece83a7c03-kube-api-access-h5tcf\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.819598 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4cmf\" (UniqueName: \"kubernetes.io/projected/ae29d14a-c8e0-4754-98da-720dd05df22f-kube-api-access-q4cmf\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.819610 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae29d14a-c8e0-4754-98da-720dd05df22f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.819903 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f31668d-a857-480f-b05a-fa46298ea10e" (UID: "0f31668d-a857-480f-b05a-fa46298ea10e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.820409 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "481f2ffd-8a55-4bb8-bbac-f0862c645d53" (UID: "481f2ffd-8a55-4bb8-bbac-f0862c645d53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.823242 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt" (OuterVolumeSpecName: "kube-api-access-mspbt") pod "481f2ffd-8a55-4bb8-bbac-f0862c645d53" (UID: "481f2ffd-8a55-4bb8-bbac-f0862c645d53"). InnerVolumeSpecName "kube-api-access-mspbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.824065 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7" (OuterVolumeSpecName: "kube-api-access-fhlq7") pod "0f31668d-a857-480f-b05a-fa46298ea10e" (UID: "0f31668d-a857-480f-b05a-fa46298ea10e"). InnerVolumeSpecName "kube-api-access-fhlq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.921920 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f2ffd-8a55-4bb8-bbac-f0862c645d53-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.921976 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mspbt\" (UniqueName: \"kubernetes.io/projected/481f2ffd-8a55-4bb8-bbac-f0862c645d53-kube-api-access-mspbt\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.921991 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f31668d-a857-480f-b05a-fa46298ea10e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.922004 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhlq7\" (UniqueName: \"kubernetes.io/projected/0f31668d-a857-480f-b05a-fa46298ea10e-kube-api-access-fhlq7\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:04 crc kubenswrapper[4814]: I0216 10:09:04.990474 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.036045 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53476134-c469-4492-8ac7-3f2ed6a87247" path="/var/lib/kubelet/pods/53476134-c469-4492-8ac7-3f2ed6a87247/volumes"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.037183 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9f6435-c405-4209-9a92-26f39daf2909" path="/var/lib/kubelet/pods/7d9f6435-c405-4209-9a92-26f39daf2909/volumes"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.142403 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62jf7\" (UniqueName: \"kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7\") pod \"59dfc847-309b-4f50-8d29-9418ba80cbd7\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.143098 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts\") pod \"59dfc847-309b-4f50-8d29-9418ba80cbd7\" (UID: \"59dfc847-309b-4f50-8d29-9418ba80cbd7\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.148662 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59dfc847-309b-4f50-8d29-9418ba80cbd7" (UID: "59dfc847-309b-4f50-8d29-9418ba80cbd7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.153894 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7" (OuterVolumeSpecName: "kube-api-access-62jf7") pod "59dfc847-309b-4f50-8d29-9418ba80cbd7" (UID: "59dfc847-309b-4f50-8d29-9418ba80cbd7"). InnerVolumeSpecName "kube-api-access-62jf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.219839 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.250033 4814 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59dfc847-309b-4f50-8d29-9418ba80cbd7-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.250060 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62jf7\" (UniqueName: \"kubernetes.io/projected/59dfc847-309b-4f50-8d29-9418ba80cbd7-kube-api-access-62jf7\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.267114 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.267171 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.317001 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.324815 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.473516 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557223 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557626 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557737 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557780 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557840 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557907 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557926 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6tbb\" (UniqueName: \"kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.557947 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd\") pod \"2ca029ee-1d79-4f76-bda8-235697a236f4\" (UID: \"2ca029ee-1d79-4f76-bda8-235697a236f4\") "
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.559388 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.559577 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.572666 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts" (OuterVolumeSpecName: "scripts") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.573254 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb" (OuterVolumeSpecName: "kube-api-access-r6tbb") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "kube-api-access-r6tbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.597288 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.597910 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine" containerID="cri-o://66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12" gracePeriod=30
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.654502 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.660475 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.660518 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6tbb\" (UniqueName: \"kubernetes.io/projected/2ca029ee-1d79-4f76-bda8-235697a236f4-kube-api-access-r6tbb\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.660550 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ca029ee-1d79-4f76-bda8-235697a236f4-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.660562 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.660571 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.668372 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.760686 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data" (OuterVolumeSpecName: "config-data") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.762392 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.762411 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.786907 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ca029ee-1d79-4f76-bda8-235697a236f4" (UID: "2ca029ee-1d79-4f76-bda8-235697a236f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.806814 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69bc859-f4a3-4e24-92be-cbe76d3faee4","Type":"ContainerStarted","Data":"aca2f7e115438baaf1e5c72988fa308ff247bac1f2612d804e2ff38246aa9f2d"}
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.812692 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerID="671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6" exitCode=0
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.812802 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerDied","Data":"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6"}
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.812868 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ca029ee-1d79-4f76-bda8-235697a236f4","Type":"ContainerDied","Data":"e4f11c2c0f9465a662efcaadc323ba84be4ffdb66c97abfc9471c3ef8dc40612"}
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.812893 4814 scope.go:117] "RemoveContainer" containerID="1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.813174 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.828595 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-g68vm" event={"ID":"59dfc847-309b-4f50-8d29-9418ba80cbd7","Type":"ContainerDied","Data":"7e45d1c6ff8feae053d6d069770da93cb924ddf1e7d566721942e8469e7783e0"}
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.828732 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d25f-account-create-update-p2nqz"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.829560 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-g68vm"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.830081 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e45d1c6ff8feae053d6d069770da93cb924ddf1e7d566721942e8469e7783e0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.834648 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.839471 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.864216 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ca029ee-1d79-4f76-bda8-235697a236f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.889459 4814 scope.go:117] "RemoveContainer" containerID="ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.900125 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.926123 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.946732 4814 scope.go:117] "RemoveContainer" containerID="4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.949594 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950064 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-central-agent"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950089 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-central-agent"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950114 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f2ffd-8a55-4bb8-bbac-f0862c645d53" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950122 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f2ffd-8a55-4bb8-bbac-f0862c645d53" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950139 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59dfc847-309b-4f50-8d29-9418ba80cbd7" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950145 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="59dfc847-309b-4f50-8d29-9418ba80cbd7" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950158 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f31668d-a857-480f-b05a-fa46298ea10e" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950165 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f31668d-a857-480f-b05a-fa46298ea10e" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950176 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae29d14a-c8e0-4754-98da-720dd05df22f" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950184 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae29d14a-c8e0-4754-98da-720dd05df22f" containerName="mariadb-account-create-update"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950200 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a291623-9f03-4157-b461-a3ece83a7c03" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950207 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a291623-9f03-4157-b461-a3ece83a7c03" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950223 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47bbc4d-27c9-488a-814c-4223fcdc8c2c" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950228 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47bbc4d-27c9-488a-814c-4223fcdc8c2c" containerName="mariadb-database-create"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950240 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="proxy-httpd"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950246 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="proxy-httpd"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950256 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-notification-agent"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950262 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-notification-agent"
Feb 16 10:09:05 crc kubenswrapper[4814]: E0216 10:09:05.950284 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="sg-core"
Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950290 4814 state_mem.go:107] "Deleted CPUSet assignment"
podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="sg-core" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950473 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a291623-9f03-4157-b461-a3ece83a7c03" containerName="mariadb-database-create" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950489 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="59dfc847-309b-4f50-8d29-9418ba80cbd7" containerName="mariadb-database-create" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950500 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f31668d-a857-480f-b05a-fa46298ea10e" containerName="mariadb-account-create-update" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950516 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae29d14a-c8e0-4754-98da-720dd05df22f" containerName="mariadb-account-create-update" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950525 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-central-agent" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950550 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47bbc4d-27c9-488a-814c-4223fcdc8c2c" containerName="mariadb-database-create" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950561 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="proxy-httpd" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950570 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="sg-core" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.950582 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" containerName="ceilometer-notification-agent" Feb 16 10:09:05 crc 
kubenswrapper[4814]: I0216 10:09:05.950597 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="481f2ffd-8a55-4bb8-bbac-f0862c645d53" containerName="mariadb-account-create-update" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.953199 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.957867 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.957993 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.958372 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.968661 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.993572 4814 scope.go:117] "RemoveContainer" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05" Feb 16 10:09:05 crc kubenswrapper[4814]: I0216 10:09:05.994768 4814 scope.go:117] "RemoveContainer" containerID="671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069137 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069246 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd\") 
pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069285 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069310 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zbc\" (UniqueName: \"kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069355 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069382 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069415 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 
10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.069588 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171167 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171800 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171838 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171859 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9zbc\" (UniqueName: \"kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171883 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171901 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.171932 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.172022 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.173270 4814 scope.go:117] "RemoveContainer" containerID="1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.173963 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: E0216 10:09:06.175066 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd\": container with ID starting with 1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd not found: ID does not exist" containerID="1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.175153 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd"} err="failed to get container status \"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd\": rpc error: code = NotFound desc = could not find container \"1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd\": container with ID starting with 1c20b58094b498a0ffd1b60f44ad89de2c10de4a496086a6f36a4e684e117fcd not found: ID does not exist" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.175235 4814 scope.go:117] "RemoveContainer" containerID="ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126" Feb 16 10:09:06 crc kubenswrapper[4814]: E0216 10:09:06.175908 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126\": container with ID starting with ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126 not found: ID does not exist" containerID="ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.175951 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126"} err="failed to get container status \"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126\": rpc error: code = NotFound desc = could not find container \"ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126\": container with ID 
starting with ae9bd597cf723e379e786ae462ccc9120c194c72d2a2692aa508ae0184f49126 not found: ID does not exist" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.175986 4814 scope.go:117] "RemoveContainer" containerID="4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.176396 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: E0216 10:09:06.178401 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c\": container with ID starting with 4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c not found: ID does not exist" containerID="4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.178443 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c"} err="failed to get container status \"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c\": rpc error: code = NotFound desc = could not find container \"4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c\": container with ID starting with 4686b28435714ef2be2940e0080fd04121186a1401e3d4bc3aad1fee83ca372c not found: ID does not exist" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.178468 4814 scope.go:117] "RemoveContainer" containerID="671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6" Feb 16 10:09:06 crc kubenswrapper[4814]: E0216 10:09:06.179117 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6\": container with ID starting with 671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6 not found: ID does not exist" containerID="671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.179143 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6"} err="failed to get container status \"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6\": rpc error: code = NotFound desc = could not find container \"671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6\": container with ID starting with 671cf4d7db5e3f91ab6080a1d0793a18ecb250332a357dfda678d0205be5adf6 not found: ID does not exist" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.182620 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.183397 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.183993 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 
10:09:06.184279 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.186319 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.196741 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zbc\" (UniqueName: \"kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc\") pod \"ceilometer-0\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") " pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.226667 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.582568 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:06 crc kubenswrapper[4814]: W0216 10:09:06.629638 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f262388_7b11_4e9e_a68e_8dff728750a6.slice/crio-0ec38657945f9d48f1338d82c454dc3fd16a77163052ad567d8a860ccf25c28d WatchSource:0}: Error finding container 0ec38657945f9d48f1338d82c454dc3fd16a77163052ad567d8a860ccf25c28d: Status 404 returned error can't find the container with id 0ec38657945f9d48f1338d82c454dc3fd16a77163052ad567d8a860ccf25c28d Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.871356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69bc859-f4a3-4e24-92be-cbe76d3faee4","Type":"ContainerStarted","Data":"c119f129c1ef53108cd8bc724e0547db40baa7155ad7eb3cc8814bc2b12b3142"} Feb 16 10:09:06 crc kubenswrapper[4814]: I0216 10:09:06.895329 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerStarted","Data":"0ec38657945f9d48f1338d82c454dc3fd16a77163052ad567d8a860ccf25c28d"} Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.022124 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca029ee-1d79-4f76-bda8-235697a236f4" path="/var/lib/kubelet/pods/2ca029ee-1d79-4f76-bda8-235697a236f4/volumes" Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.769885 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.916032 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerStarted","Data":"c3545b590714a6a77a0458e4c3dc1a9ab3497882a5ec2523759af2a01782fdd6"} Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.916098 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerStarted","Data":"5c83870379314a6ff0c1fea9dce424c501532f10a405d1fba3d964e1007ce231"} Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.918139 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e69bc859-f4a3-4e24-92be-cbe76d3faee4","Type":"ContainerStarted","Data":"b4bfba212cc2d6345d9f1b318a6e76e9920febe908ac3a99cbe1c6baa57a9ef8"} Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.925966 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"} Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.960397 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.960495 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:09:07 crc kubenswrapper[4814]: I0216 10:09:07.964364 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.96434326 podStartE2EDuration="4.96434326s" podCreationTimestamp="2026-02-16 10:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:09:07.955105368 +0000 UTC m=+1405.648261548" watchObservedRunningTime="2026-02-16 10:09:07.96434326 +0000 UTC m=+1405.657499440" Feb 16 10:09:08 crc kubenswrapper[4814]: I0216 10:09:08.410294 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 10:09:08 crc kubenswrapper[4814]: I0216 10:09:08.411422 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 10:09:08 crc kubenswrapper[4814]: I0216 10:09:08.731699 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 10:09:09 crc kubenswrapper[4814]: I0216 10:09:09.953711 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerStarted","Data":"d3340146196ac89249ae5bc128bf6e477fbb1eacdbcbfbef96a360d497a460d1"} Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.376278 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t7qmf"] Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.378826 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.381689 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.385229 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.388105 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zktlr" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.397078 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t7qmf"] Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.494624 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.494722 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf" Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.494871 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhcpl\" (UniqueName: \"kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " 
pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.495199 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.597682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.597743 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhcpl\" (UniqueName: \"kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.597825 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.597901 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.606901 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.611455 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.612048 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.627120 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhcpl\" (UniqueName: \"kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl\") pod \"nova-cell0-conductor-db-sync-t7qmf\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") " pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:10 crc kubenswrapper[4814]: I0216 10:09:10.709858 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:11 crc kubenswrapper[4814]: W0216 10:09:11.444762 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75398570_0b03_46a0_93b7_84c92628a4d9.slice/crio-a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0 WatchSource:0}: Error finding container a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0: Status 404 returned error can't find the container with id a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.446905 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t7qmf"]
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.875893 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.960888 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle\") pod \"88895e94-c6c9-4622-b6eb-94982898ac2b\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") "
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.961055 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfvc\" (UniqueName: \"kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc\") pod \"88895e94-c6c9-4622-b6eb-94982898ac2b\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") "
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.961093 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca\") pod \"88895e94-c6c9-4622-b6eb-94982898ac2b\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") "
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.961155 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs\") pod \"88895e94-c6c9-4622-b6eb-94982898ac2b\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") "
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.961331 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data\") pod \"88895e94-c6c9-4622-b6eb-94982898ac2b\" (UID: \"88895e94-c6c9-4622-b6eb-94982898ac2b\") "
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.963492 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs" (OuterVolumeSpecName: "logs") pod "88895e94-c6c9-4622-b6eb-94982898ac2b" (UID: "88895e94-c6c9-4622-b6eb-94982898ac2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:11 crc kubenswrapper[4814]: I0216 10:09:11.969434 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc" (OuterVolumeSpecName: "kube-api-access-7kfvc") pod "88895e94-c6c9-4622-b6eb-94982898ac2b" (UID: "88895e94-c6c9-4622-b6eb-94982898ac2b"). InnerVolumeSpecName "kube-api-access-7kfvc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.069116 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "88895e94-c6c9-4622-b6eb-94982898ac2b" (UID: "88895e94-c6c9-4622-b6eb-94982898ac2b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.070963 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfvc\" (UniqueName: \"kubernetes.io/projected/88895e94-c6c9-4622-b6eb-94982898ac2b-kube-api-access-7kfvc\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.070984 4814 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.070994 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88895e94-c6c9-4622-b6eb-94982898ac2b-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.087594 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" event={"ID":"75398570-0b03-46a0-93b7-84c92628a4d9","Type":"ContainerStarted","Data":"a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0"}
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.105142 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88895e94-c6c9-4622-b6eb-94982898ac2b" (UID: "88895e94-c6c9-4622-b6eb-94982898ac2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.138174 4814 generic.go:334] "Generic (PLEG): container finished" podID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerID="66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12" exitCode=0
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.138690 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerDied","Data":"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"}
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.138806 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"88895e94-c6c9-4622-b6eb-94982898ac2b","Type":"ContainerDied","Data":"57a8bafe3a054b766e387beb450d3e3e8020c8ead5debdb497164c6a64af0918"}
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.138972 4814 scope.go:117] "RemoveContainer" containerID="66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.139268 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.180328 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.187858 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427" exitCode=0
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.188318 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"}
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.189462 4814 scope.go:117] "RemoveContainer" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.190299 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.196299 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data" (OuterVolumeSpecName: "config-data") pod "88895e94-c6c9-4622-b6eb-94982898ac2b" (UID: "88895e94-c6c9-4622-b6eb-94982898ac2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.241949 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerStarted","Data":"875152b4ed8b5646775f2771b9e28e1b6df63d0058c8e0ee461920915794c291"}
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.243864 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.243782 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="proxy-httpd" containerID="cri-o://875152b4ed8b5646775f2771b9e28e1b6df63d0058c8e0ee461920915794c291" gracePeriod=30
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.242787 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-central-agent" containerID="cri-o://5c83870379314a6ff0c1fea9dce424c501532f10a405d1fba3d964e1007ce231" gracePeriod=30
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.243817 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-notification-agent" containerID="cri-o://c3545b590714a6a77a0458e4c3dc1a9ab3497882a5ec2523759af2a01782fdd6" gracePeriod=30
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.243800 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="sg-core" containerID="cri-o://d3340146196ac89249ae5bc128bf6e477fbb1eacdbcbfbef96a360d497a460d1" gracePeriod=30
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.247017 4814 scope.go:117] "RemoveContainer" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.285121 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88895e94-c6c9-4622-b6eb-94982898ac2b-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.292745 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.080437509 podStartE2EDuration="7.29272205s" podCreationTimestamp="2026-02-16 10:09:05 +0000 UTC" firstStartedPulling="2026-02-16 10:09:06.638349506 +0000 UTC m=+1404.331505686" lastFinishedPulling="2026-02-16 10:09:10.850634047 +0000 UTC m=+1408.543790227" observedRunningTime="2026-02-16 10:09:12.284527595 +0000 UTC m=+1409.977683775" watchObservedRunningTime="2026-02-16 10:09:12.29272205 +0000 UTC m=+1409.985878240"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.401919 4814 scope.go:117] "RemoveContainer" containerID="66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.404838 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12\": container with ID starting with 66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12 not found: ID does not exist" containerID="66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.404919 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12"} err="failed to get container status \"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12\": rpc error: code = NotFound desc = could not find container \"66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12\": container with ID starting with 66b2d2b37c8a5571a38021d5e0b3d05818f353e060f39d0a1df3139199ff5c12 not found: ID does not exist"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.404967 4814 scope.go:117] "RemoveContainer" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.405401 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353\": container with ID starting with 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 not found: ID does not exist" containerID="58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.405424 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353"} err="failed to get container status \"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353\": rpc error: code = NotFound desc = could not find container \"58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353\": container with ID starting with 58062c2f2ad7c23ecb62231d96c267816ba5c81941644e4eb1e6244ce9b01353 not found: ID does not exist"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.405446 4814 scope.go:117] "RemoveContainer" containerID="4040b35c8b4cadff90134b35f7dd8b6c0317d2fb465c9b70ad43b643b800ac05"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.494561 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.575957 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.586635 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.587459 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.587483 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.587553 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.587562 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: E0216 10:09:12.587579 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.587589 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.588126 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.588148 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.588157 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.589430 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.592553 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.602454 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.677063 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.677151 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.677164 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.722136 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.722278 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.722319 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.722353 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk7kp\" (UniqueName: \"kubernetes.io/projected/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-kube-api-access-rk7kp\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.722377 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.824384 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.824478 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.824550 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk7kp\" (UniqueName: \"kubernetes.io/projected/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-kube-api-access-rk7kp\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.824591 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.824690 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.825547 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-logs\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.831980 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-config-data\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.832883 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.837734 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.847688 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk7kp\" (UniqueName: \"kubernetes.io/projected/2b889a9b-aa4c-4e93-92f7-b37c7e86838b-kube-api-access-rk7kp\") pod \"watcher-decision-engine-0\" (UID: \"2b889a9b-aa4c-4e93-92f7-b37c7e86838b\") " pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:12 crc kubenswrapper[4814]: I0216 10:09:12.970485 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.015802 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" path="/var/lib/kubelet/pods/88895e94-c6c9-4622-b6eb-94982898ac2b/volumes"
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272097 4814 generic.go:334] "Generic (PLEG): container finished" podID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerID="875152b4ed8b5646775f2771b9e28e1b6df63d0058c8e0ee461920915794c291" exitCode=0
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272638 4814 generic.go:334] "Generic (PLEG): container finished" podID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerID="d3340146196ac89249ae5bc128bf6e477fbb1eacdbcbfbef96a360d497a460d1" exitCode=2
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272653 4814 generic.go:334] "Generic (PLEG): container finished" podID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerID="c3545b590714a6a77a0458e4c3dc1a9ab3497882a5ec2523759af2a01782fdd6" exitCode=0
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272726 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerDied","Data":"875152b4ed8b5646775f2771b9e28e1b6df63d0058c8e0ee461920915794c291"}
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272764 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerDied","Data":"d3340146196ac89249ae5bc128bf6e477fbb1eacdbcbfbef96a360d497a460d1"}
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.272779 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerDied","Data":"c3545b590714a6a77a0458e4c3dc1a9ab3497882a5ec2523759af2a01782fdd6"}
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.280371 4814 scope.go:117] "RemoveContainer" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"
Feb 16 10:09:13 crc kubenswrapper[4814]: E0216 10:09:13.280826 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:09:13 crc kubenswrapper[4814]: I0216 10:09:13.600011 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Feb 16 10:09:13 crc kubenswrapper[4814]: W0216 10:09:13.607668 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b889a9b_aa4c_4e93_92f7_b37c7e86838b.slice/crio-85912de07d1c1ef346bc1b888c9dd436909cef8c019105c003ddd34ec99c46d4 WatchSource:0}: Error finding container 85912de07d1c1ef346bc1b888c9dd436909cef8c019105c003ddd34ec99c46d4: Status 404 returned error can't find the container with id 85912de07d1c1ef346bc1b888c9dd436909cef8c019105c003ddd34ec99c46d4
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.292295 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b889a9b-aa4c-4e93-92f7-b37c7e86838b","Type":"ContainerStarted","Data":"d22dfcf3c9aba8846b7626828e27cb080b1ffc4ba6f07d23d5e045b290bfc7f3"}
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.292862 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"2b889a9b-aa4c-4e93-92f7-b37c7e86838b","Type":"ContainerStarted","Data":"85912de07d1c1ef346bc1b888c9dd436909cef8c019105c003ddd34ec99c46d4"}
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.314970 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.314947928 podStartE2EDuration="2.314947928s" podCreationTimestamp="2026-02-16 10:09:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:09:14.311547985 +0000 UTC m=+1412.004704175" watchObservedRunningTime="2026-02-16 10:09:14.314947928 +0000 UTC m=+1412.008104108"
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.338826 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.338896 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.380781 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:14 crc kubenswrapper[4814]: I0216 10:09:14.438963 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:15 crc kubenswrapper[4814]: I0216 10:09:15.318796 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:15 crc kubenswrapper[4814]: I0216 10:09:15.319337 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:17 crc kubenswrapper[4814]: I0216 10:09:17.339189 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 10:09:17 crc kubenswrapper[4814]: I0216 10:09:17.341665 4814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 10:09:17 crc kubenswrapper[4814]: I0216 10:09:17.883731 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:17 crc kubenswrapper[4814]: I0216 10:09:17.996944 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 16 10:09:22 crc kubenswrapper[4814]: I0216 10:09:22.971914 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:23 crc kubenswrapper[4814]: I0216 10:09:23.018740 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:23 crc kubenswrapper[4814]: I0216 10:09:23.430106 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:23 crc kubenswrapper[4814]: I0216 10:09:23.471247 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Feb 16 10:09:23 crc kubenswrapper[4814]: I0216 10:09:23.993944 4814 scope.go:117] "RemoveContainer" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"
Feb 16 10:09:23 crc kubenswrapper[4814]: E0216 10:09:23.994292 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:09:24 crc kubenswrapper[4814]: I0216 10:09:24.444296 4814 generic.go:334] "Generic (PLEG): container finished" podID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerID="5c83870379314a6ff0c1fea9dce424c501532f10a405d1fba3d964e1007ce231" exitCode=0
Feb 16 10:09:24 crc kubenswrapper[4814]: I0216 10:09:24.444377 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerDied","Data":"5c83870379314a6ff0c1fea9dce424c501532f10a405d1fba3d964e1007ce231"}
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.136464 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305019 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305231 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305366 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305427 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305516 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9zbc\" (UniqueName: \"kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305605 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305648 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.305698 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd\") pod \"3f262388-7b11-4e9e-a68e-8dff728750a6\" (UID: \"3f262388-7b11-4e9e-a68e-8dff728750a6\") "
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.306034 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.306819 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.308344 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.314675 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc" (OuterVolumeSpecName: "kube-api-access-s9zbc") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "kube-api-access-s9zbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.315797 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts" (OuterVolumeSpecName: "scripts") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.352807 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.376778 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "ceilometer-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.419652 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9zbc\" (UniqueName: \"kubernetes.io/projected/3f262388-7b11-4e9e-a68e-8dff728750a6-kube-api-access-s9zbc\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.420081 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.420169 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f262388-7b11-4e9e-a68e-8dff728750a6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.420238 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.420304 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.425821 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.444889 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data" (OuterVolumeSpecName: "config-data") pod "3f262388-7b11-4e9e-a68e-8dff728750a6" (UID: "3f262388-7b11-4e9e-a68e-8dff728750a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.469626 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" event={"ID":"75398570-0b03-46a0-93b7-84c92628a4d9","Type":"ContainerStarted","Data":"31b3173918f7f09321834fd4de5f62557cf8038e7df792e2d0e4d7d89f7bcd26"} Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.474463 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.486737 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f262388-7b11-4e9e-a68e-8dff728750a6","Type":"ContainerDied","Data":"0ec38657945f9d48f1338d82c454dc3fd16a77163052ad567d8a860ccf25c28d"} Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.487054 4814 scope.go:117] "RemoveContainer" containerID="875152b4ed8b5646775f2771b9e28e1b6df63d0058c8e0ee461920915794c291" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.518524 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" podStartSLOduration=2.156409348 podStartE2EDuration="15.518498877s" podCreationTimestamp="2026-02-16 10:09:10 +0000 UTC" firstStartedPulling="2026-02-16 10:09:11.452351971 +0000 UTC m=+1409.145508151" lastFinishedPulling="2026-02-16 10:09:24.8144415 +0000 UTC m=+1422.507597680" observedRunningTime="2026-02-16 10:09:25.515667959 +0000 UTC m=+1423.208824139" 
watchObservedRunningTime="2026-02-16 10:09:25.518498877 +0000 UTC m=+1423.211655057" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.521960 4814 scope.go:117] "RemoveContainer" containerID="d3340146196ac89249ae5bc128bf6e477fbb1eacdbcbfbef96a360d497a460d1" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.522328 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.523424 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f262388-7b11-4e9e-a68e-8dff728750a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.558141 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.560797 4814 scope.go:117] "RemoveContainer" containerID="c3545b590714a6a77a0458e4c3dc1a9ab3497882a5ec2523759af2a01782fdd6" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.583227 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.592867 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:25 crc kubenswrapper[4814]: E0216 10:09:25.593486 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-notification-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593506 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-notification-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: E0216 10:09:25.593561 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="proxy-httpd" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593569 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="proxy-httpd" Feb 16 10:09:25 crc kubenswrapper[4814]: E0216 10:09:25.593582 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="sg-core" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593592 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="sg-core" Feb 16 10:09:25 crc kubenswrapper[4814]: E0216 10:09:25.593624 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593632 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine" Feb 16 10:09:25 crc kubenswrapper[4814]: E0216 10:09:25.593648 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-central-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593655 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-central-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593893 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="sg-core" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593915 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-central-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593937 4814 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="ceilometer-notification-agent" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593946 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="88895e94-c6c9-4622-b6eb-94982898ac2b" containerName="watcher-decision-engine" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.593956 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" containerName="proxy-httpd" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.596256 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.600639 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.600907 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.601029 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.605650 4814 scope.go:117] "RemoveContainer" containerID="5c83870379314a6ff0c1fea9dce424c501532f10a405d1fba3d964e1007ce231" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.606359 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.631588 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.631667 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.631957 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8tm\" (UniqueName: \"kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.631990 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.632024 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.632058 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.632083 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.632400 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.735671 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.735739 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.735903 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8tm\" (UniqueName: \"kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.735932 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc 
kubenswrapper[4814]: I0216 10:09:25.735962 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.735981 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.736005 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.736138 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.736763 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.736802 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd\") pod 
\"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.741669 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.744858 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.745142 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.745248 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.751707 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.761641 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8tm\" 
(UniqueName: \"kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm\") pod \"ceilometer-0\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") " pod="openstack/ceilometer-0" Feb 16 10:09:25 crc kubenswrapper[4814]: I0216 10:09:25.923411 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:09:26 crc kubenswrapper[4814]: I0216 10:09:26.495178 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:27 crc kubenswrapper[4814]: I0216 10:09:27.014497 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f262388-7b11-4e9e-a68e-8dff728750a6" path="/var/lib/kubelet/pods/3f262388-7b11-4e9e-a68e-8dff728750a6/volumes" Feb 16 10:09:27 crc kubenswrapper[4814]: I0216 10:09:27.540401 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerStarted","Data":"a0bb101cb11e26c910eba1b56c7b21d9b621f6b2f636dfda6c5fd8385ad6c199"} Feb 16 10:09:27 crc kubenswrapper[4814]: I0216 10:09:27.541002 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerStarted","Data":"b7289e8bf37a4138772d9d1aa091380882eba7033e96d202cabf0dd9b26a2fb0"} Feb 16 10:09:27 crc kubenswrapper[4814]: I0216 10:09:27.541020 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerStarted","Data":"0970c2f932f65c3b5d6a13601410770b88267a58fe922b406da3019f5bc09260"} Feb 16 10:09:27 crc kubenswrapper[4814]: I0216 10:09:27.794987 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:09:28 crc kubenswrapper[4814]: I0216 10:09:28.558063 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerStarted","Data":"3b141facc58537ec075c0a74fe1d108c1041a874ce4cfaa8c64524f3dd395631"} Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.582856 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerStarted","Data":"92ce7f28cc0f33a92386134ff28f2e3869e5a8e4f45a588e7c3ae90879db6ec2"} Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.583452 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-central-agent" containerID="cri-o://b7289e8bf37a4138772d9d1aa091380882eba7033e96d202cabf0dd9b26a2fb0" gracePeriod=30 Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.583633 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="proxy-httpd" containerID="cri-o://92ce7f28cc0f33a92386134ff28f2e3869e5a8e4f45a588e7c3ae90879db6ec2" gracePeriod=30 Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.583719 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-notification-agent" containerID="cri-o://a0bb101cb11e26c910eba1b56c7b21d9b621f6b2f636dfda6c5fd8385ad6c199" gracePeriod=30 Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.583859 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="sg-core" containerID="cri-o://3b141facc58537ec075c0a74fe1d108c1041a874ce4cfaa8c64524f3dd395631" gracePeriod=30 Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.583476 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 16 10:09:29 crc kubenswrapper[4814]: I0216 10:09:29.623939 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.17223906 podStartE2EDuration="4.623919646s" podCreationTimestamp="2026-02-16 10:09:25 +0000 UTC" firstStartedPulling="2026-02-16 10:09:26.529467847 +0000 UTC m=+1424.222624027" lastFinishedPulling="2026-02-16 10:09:28.981148433 +0000 UTC m=+1426.674304613" observedRunningTime="2026-02-16 10:09:29.622027604 +0000 UTC m=+1427.315183774" watchObservedRunningTime="2026-02-16 10:09:29.623919646 +0000 UTC m=+1427.317075826" Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.598457 4814 generic.go:334] "Generic (PLEG): container finished" podID="f738c250-3f12-4659-924c-b7645b9f436d" containerID="92ce7f28cc0f33a92386134ff28f2e3869e5a8e4f45a588e7c3ae90879db6ec2" exitCode=0 Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.599105 4814 generic.go:334] "Generic (PLEG): container finished" podID="f738c250-3f12-4659-924c-b7645b9f436d" containerID="3b141facc58537ec075c0a74fe1d108c1041a874ce4cfaa8c64524f3dd395631" exitCode=2 Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.599116 4814 generic.go:334] "Generic (PLEG): container finished" podID="f738c250-3f12-4659-924c-b7645b9f436d" containerID="a0bb101cb11e26c910eba1b56c7b21d9b621f6b2f636dfda6c5fd8385ad6c199" exitCode=0 Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.598798 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerDied","Data":"92ce7f28cc0f33a92386134ff28f2e3869e5a8e4f45a588e7c3ae90879db6ec2"} Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.599158 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerDied","Data":"3b141facc58537ec075c0a74fe1d108c1041a874ce4cfaa8c64524f3dd395631"} 
Feb 16 10:09:30 crc kubenswrapper[4814]: I0216 10:09:30.599176 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerDied","Data":"a0bb101cb11e26c910eba1b56c7b21d9b621f6b2f636dfda6c5fd8385ad6c199"} Feb 16 10:09:34 crc kubenswrapper[4814]: I0216 10:09:34.995078 4814 scope.go:117] "RemoveContainer" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427" Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.681356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d"} Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.701727 4814 generic.go:334] "Generic (PLEG): container finished" podID="f738c250-3f12-4659-924c-b7645b9f436d" containerID="b7289e8bf37a4138772d9d1aa091380882eba7033e96d202cabf0dd9b26a2fb0" exitCode=0 Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.701811 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerDied","Data":"b7289e8bf37a4138772d9d1aa091380882eba7033e96d202cabf0dd9b26a2fb0"} Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.701858 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f738c250-3f12-4659-924c-b7645b9f436d","Type":"ContainerDied","Data":"0970c2f932f65c3b5d6a13601410770b88267a58fe922b406da3019f5bc09260"} Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.701878 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0970c2f932f65c3b5d6a13601410770b88267a58fe922b406da3019f5bc09260" Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.730944 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.828654 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.828839 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.828946 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.828969 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.829028 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.829051 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.829375 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.829963 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.830150 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.830639 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.830665 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f738c250-3f12-4659-924c-b7645b9f436d-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.836011 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts" (OuterVolumeSpecName: "scripts") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.879354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.896082 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.931908 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8tm\" (UniqueName: \"kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm\") pod \"f738c250-3f12-4659-924c-b7645b9f436d\" (UID: \"f738c250-3f12-4659-924c-b7645b9f436d\") "
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.932808 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.933685 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.933716 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.933730 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.933742 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.935779 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm" (OuterVolumeSpecName: "kube-api-access-sr8tm") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "kube-api-access-sr8tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:36 crc kubenswrapper[4814]: I0216 10:09:36.960148 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data" (OuterVolumeSpecName: "config-data") pod "f738c250-3f12-4659-924c-b7645b9f436d" (UID: "f738c250-3f12-4659-924c-b7645b9f436d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.035586 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8tm\" (UniqueName: \"kubernetes.io/projected/f738c250-3f12-4659-924c-b7645b9f436d-kube-api-access-sr8tm\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.035621 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f738c250-3f12-4659-924c-b7645b9f436d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.677093 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.713563 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.745442 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.758527 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.785742 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:37 crc kubenswrapper[4814]: E0216 10:09:37.786300 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-central-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786379 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-central-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: E0216 10:09:37.786404 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-notification-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786412 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-notification-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: E0216 10:09:37.786472 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="proxy-httpd"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786496 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="proxy-httpd"
Feb 16 10:09:37 crc kubenswrapper[4814]: E0216 10:09:37.786559 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="sg-core"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786566 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="sg-core"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786767 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-notification-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786789 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="proxy-httpd"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786804 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="ceilometer-central-agent"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.786814 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f738c250-3f12-4659-924c-b7645b9f436d" containerName="sg-core"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.788638 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.790694 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.791349 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.798019 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.857980 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.957747 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.957858 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.957904 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.957939 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbghn\" (UniqueName: \"kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.958111 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.958168 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.958251 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.958683 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.960627 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:09:37 crc kubenswrapper[4814]: I0216 10:09:37.960684 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062055 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062199 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062476 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062527 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062612 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062658 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbghn\" (UniqueName: \"kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.062843 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.063462 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.063682 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.067522 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.068472 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.086912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.087970 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.088105 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.091234 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbghn\" (UniqueName: \"kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn\") pod \"ceilometer-0\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.114604 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.625636 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 10:09:38 crc kubenswrapper[4814]: I0216 10:09:38.748261 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerStarted","Data":"6984427bf844eebc733fa300dd66d33e0b6616507281e3d8121632617b8fe8c3"}
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.005055 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f738c250-3f12-4659-924c-b7645b9f436d" path="/var/lib/kubelet/pods/f738c250-3f12-4659-924c-b7645b9f436d/volumes"
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.760900 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerStarted","Data":"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791"}
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.764394 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d" exitCode=0
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.764428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d"}
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.764456 4814 scope.go:117] "RemoveContainer" containerID="c9d418761d31a937fe20b0e1ee2ad852a3e779d98c6e91b203a54cb923fbd427"
Feb 16 10:09:39 crc kubenswrapper[4814]: I0216 10:09:39.765252 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d"
Feb 16 10:09:39 crc kubenswrapper[4814]: E0216 10:09:39.765598 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:09:40 crc kubenswrapper[4814]: I0216 10:09:40.777447 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerStarted","Data":"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812"}
Feb 16 10:09:41 crc kubenswrapper[4814]: I0216 10:09:41.793070 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerStarted","Data":"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250"}
Feb 16 10:09:42 crc kubenswrapper[4814]: I0216 10:09:42.676833 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:42 crc kubenswrapper[4814]: I0216 10:09:42.676896 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:09:42 crc kubenswrapper[4814]: I0216 10:09:42.677778 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d"
Feb 16 10:09:42 crc kubenswrapper[4814]: E0216 10:09:42.678150 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:09:43 crc kubenswrapper[4814]: I0216 10:09:43.817121 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerStarted","Data":"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba"}
Feb 16 10:09:43 crc kubenswrapper[4814]: I0216 10:09:43.817981 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 10:09:43 crc kubenswrapper[4814]: I0216 10:09:43.847623 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.542666824 podStartE2EDuration="6.847602607s" podCreationTimestamp="2026-02-16 10:09:37 +0000 UTC" firstStartedPulling="2026-02-16 10:09:38.634037755 +0000 UTC m=+1436.327193935" lastFinishedPulling="2026-02-16 10:09:42.938973528 +0000 UTC m=+1440.632129718" observedRunningTime="2026-02-16 10:09:43.841661107 +0000 UTC m=+1441.534817287" watchObservedRunningTime="2026-02-16 10:09:43.847602607 +0000 UTC m=+1441.540758787"
Feb 16 10:09:47 crc kubenswrapper[4814]: I0216 10:09:47.863790 4814 generic.go:334] "Generic (PLEG): container finished" podID="75398570-0b03-46a0-93b7-84c92628a4d9" containerID="31b3173918f7f09321834fd4de5f62557cf8038e7df792e2d0e4d7d89f7bcd26" exitCode=0
Feb 16 10:09:47 crc kubenswrapper[4814]: I0216 10:09:47.864142 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" event={"ID":"75398570-0b03-46a0-93b7-84c92628a4d9","Type":"ContainerDied","Data":"31b3173918f7f09321834fd4de5f62557cf8038e7df792e2d0e4d7d89f7bcd26"}
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.301729 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.491178 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhcpl\" (UniqueName: \"kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl\") pod \"75398570-0b03-46a0-93b7-84c92628a4d9\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") "
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.491325 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") pod \"75398570-0b03-46a0-93b7-84c92628a4d9\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") "
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.491488 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle\") pod \"75398570-0b03-46a0-93b7-84c92628a4d9\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") "
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.491513 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts\") pod \"75398570-0b03-46a0-93b7-84c92628a4d9\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") "
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.496549 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts" (OuterVolumeSpecName: "scripts") pod "75398570-0b03-46a0-93b7-84c92628a4d9" (UID: "75398570-0b03-46a0-93b7-84c92628a4d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.503183 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl" (OuterVolumeSpecName: "kube-api-access-zhcpl") pod "75398570-0b03-46a0-93b7-84c92628a4d9" (UID: "75398570-0b03-46a0-93b7-84c92628a4d9"). InnerVolumeSpecName "kube-api-access-zhcpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:09:49 crc kubenswrapper[4814]: E0216 10:09:49.516074 4814 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data podName:75398570-0b03-46a0-93b7-84c92628a4d9 nodeName:}" failed. No retries permitted until 2026-02-16 10:09:50.016015714 +0000 UTC m=+1447.709171894 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data") pod "75398570-0b03-46a0-93b7-84c92628a4d9" (UID: "75398570-0b03-46a0-93b7-84c92628a4d9") : error deleting /var/lib/kubelet/pods/75398570-0b03-46a0-93b7-84c92628a4d9/volume-subpaths: remove /var/lib/kubelet/pods/75398570-0b03-46a0-93b7-84c92628a4d9/volume-subpaths: no such file or directory
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.519113 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75398570-0b03-46a0-93b7-84c92628a4d9" (UID: "75398570-0b03-46a0-93b7-84c92628a4d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.594416 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.594660 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.594722 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhcpl\" (UniqueName: \"kubernetes.io/projected/75398570-0b03-46a0-93b7-84c92628a4d9-kube-api-access-zhcpl\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.888926 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t7qmf" event={"ID":"75398570-0b03-46a0-93b7-84c92628a4d9","Type":"ContainerDied","Data":"a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0"}
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.888967 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a392bbac0484b3d8ca1559c70c35a475b599833f1c1b7d878b758569d02540c0"
Feb 16 10:09:49 crc kubenswrapper[4814]: I0216 10:09:49.888985 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t7qmf"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.030927 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 10:09:50 crc kubenswrapper[4814]: E0216 10:09:50.031706 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75398570-0b03-46a0-93b7-84c92628a4d9" containerName="nova-cell0-conductor-db-sync"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.031782 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="75398570-0b03-46a0-93b7-84c92628a4d9" containerName="nova-cell0-conductor-db-sync"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.032091 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="75398570-0b03-46a0-93b7-84c92628a4d9" containerName="nova-cell0-conductor-db-sync"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.032979 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.055382 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.104906 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") pod \"75398570-0b03-46a0-93b7-84c92628a4d9\" (UID: \"75398570-0b03-46a0-93b7-84c92628a4d9\") "
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.105203 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6pdh\" (UniqueName: \"kubernetes.io/projected/ff9316e8-e703-4057-8e8f-f01ac439748d-kube-api-access-h6pdh\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.105241 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.105361 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.113368 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data" (OuterVolumeSpecName: "config-data") pod "75398570-0b03-46a0-93b7-84c92628a4d9" (UID: "75398570-0b03-46a0-93b7-84c92628a4d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.206398 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.206468 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6pdh\" (UniqueName: \"kubernetes.io/projected/ff9316e8-e703-4057-8e8f-f01ac439748d-kube-api-access-h6pdh\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.206498 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.206698 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75398570-0b03-46a0-93b7-84c92628a4d9-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.210996 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.211554 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff9316e8-e703-4057-8e8f-f01ac439748d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.226852 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6pdh\" (UniqueName: \"kubernetes.io/projected/ff9316e8-e703-4057-8e8f-f01ac439748d-kube-api-access-h6pdh\") pod \"nova-cell0-conductor-0\" (UID: \"ff9316e8-e703-4057-8e8f-f01ac439748d\") " pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.360883 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.814722 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 16 10:09:50 crc kubenswrapper[4814]: I0216 10:09:50.902074 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff9316e8-e703-4057-8e8f-f01ac439748d","Type":"ContainerStarted","Data":"dde2e293e08e390e75af8b4f2442f55cc3315f20b55f1aa5b6fc05aa3a518012"}
Feb 16 10:09:51 crc kubenswrapper[4814]: I0216 10:09:51.926467 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ff9316e8-e703-4057-8e8f-f01ac439748d","Type":"ContainerStarted","Data":"51bca114a175b13df55a09d04ce2bdfd3bcf709fbabd0b97dac009e791031a0d"}
Feb 16 10:09:51 crc kubenswrapper[4814]: I0216 10:09:51.927120 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 16 10:09:51 crc kubenswrapper[4814]: I0216 10:09:51.975816 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.975786346 podStartE2EDuration="2.975786346s" podCreationTimestamp="2026-02-16 10:09:49 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:09:51.958772028 +0000 UTC m=+1449.651928208" watchObservedRunningTime="2026-02-16 10:09:51.975786346 +0000 UTC m=+1449.668942516" Feb 16 10:09:53 crc kubenswrapper[4814]: I0216 10:09:53.002866 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d" Feb 16 10:09:53 crc kubenswrapper[4814]: E0216 10:09:53.003970 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.394359 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.920993 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xjvcb"] Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.922995 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.930887 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.930900 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 10:10:00 crc kubenswrapper[4814]: I0216 10:10:00.933492 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjvcb"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.073826 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.074050 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.074090 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.074273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq49h\" (UniqueName: 
\"kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.178977 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq49h\" (UniqueName: \"kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.179088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.179227 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.179264 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.203515 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.206141 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.212216 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.241164 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq49h\" (UniqueName: \"kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h\") pod \"nova-cell0-cell-mapping-xjvcb\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") " pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.252022 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjvcb" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.316605 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.318575 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.325175 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.363915 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.366116 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.386858 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.387125 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.387231 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfdc8\" (UniqueName: \"kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.387319 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data\") pod \"nova-api-0\" (UID: 
\"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.396898 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.408447 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.447482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.466555 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.479438 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491343 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491460 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491484 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 
10:10:01.491508 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxct\" (UniqueName: \"kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491578 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491615 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfdc8\" (UniqueName: \"kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.491664 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.500891 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.524688 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: 
I0216 10:10:01.530645 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.535981 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.547416 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.598206 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfdc8\" (UniqueName: \"kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8\") pod \"nova-api-0\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") " pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.603175 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.603411 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdw2\" (UniqueName: \"kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 
10:10:01.603525 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zxct\" (UniqueName: \"kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.603585 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.603675 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.639498 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.642364 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.644151 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.644175 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.674744 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zxct\" (UniqueName: \"kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.695594 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.697386 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.717163 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.749301 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdw2\" (UniqueName: \"kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.749943 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.750051 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.750075 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.758701 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " 
pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.767005 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.776354 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.787119 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.808479 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdw2\" (UniqueName: \"kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2\") pod \"nova-metadata-0\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") " pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.828929 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.842825 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.852802 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.852967 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwv64\" (UniqueName: \"kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.853027 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.861621 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.877094 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.904480 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955425 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955495 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955524 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955596 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwv64\" (UniqueName: \"kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955625 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: 
\"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955703 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955742 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955759 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.955822 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.962282 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc 
kubenswrapper[4814]: I0216 10:10:01.962794 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.964954 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:01 crc kubenswrapper[4814]: I0216 10:10:01.974863 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwv64\" (UniqueName: \"kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64\") pod \"nova-scheduler-0\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.058982 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.059416 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.059509 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " 
pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.059812 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.059966 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.065094 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.067478 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.067488 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 
crc kubenswrapper[4814]: I0216 10:10:02.070146 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.071451 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.075655 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.077510 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.089605 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf\") pod \"dnsmasq-dns-76d8bd6559-6rc56\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.191985 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.372042 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjvcb"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.631637 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.675852 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.784666 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-grjp4"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.786950 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.794699 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.794801 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.796888 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-grjp4"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.848692 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.848756 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.849055 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmz6\" (UniqueName: \"kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.849376 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.951097 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.951166 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.951226 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gkmz6\" (UniqueName: \"kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.951301 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.963819 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.964677 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.965545 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.965639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " 
pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.976273 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmz6\" (UniqueName: \"kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6\") pod \"nova-cell1-conductor-db-sync-grjp4\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:02 crc kubenswrapper[4814]: I0216 10:10:02.976467 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 10:10:03 crc kubenswrapper[4814]: W0216 10:10:03.005470 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bbce985_e5b4_499a_af2b_8fd36ab9e13e.slice/crio-625012abb55eb3d7d5a75039db38b1d274668c0c73fa3ccd23cad23aaceb0b85 WatchSource:0}: Error finding container 625012abb55eb3d7d5a75039db38b1d274668c0c73fa3ccd23cad23aaceb0b85: Status 404 returned error can't find the container with id 625012abb55eb3d7d5a75039db38b1d274668c0c73fa3ccd23cad23aaceb0b85 Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.069586 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjvcb" event={"ID":"f4cc5476-cf44-45e0-877d-85494accff3c","Type":"ContainerStarted","Data":"9b17d764852f065060f7a58d165a60d709fd5339df426c0f16a6f192aed24a6a"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.069726 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjvcb" event={"ID":"f4cc5476-cf44-45e0-877d-85494accff3c","Type":"ContainerStarted","Data":"c8f37e73fdb5547f95bebf0e79f43f8a2fb5a81de672e40051d868840b2f33cb"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.071174 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"00da9365-eada-4d4d-8edf-636919e9d54d","Type":"ContainerStarted","Data":"b690d7eb9c6bedf72745f6989245b5e45c3824bb8e5596b93ff6a849f30e74d3"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.073981 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerStarted","Data":"33a250cb2c5fda3a6acd7c17a888c239a91669ee4e485cf23117ff6bb878b5e4"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.088848 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerStarted","Data":"625012abb55eb3d7d5a75039db38b1d274668c0c73fa3ccd23cad23aaceb0b85"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.096422 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec7b7475-afbe-4248-8469-1cacc110749a","Type":"ContainerStarted","Data":"2eccda878cce121a4af97bab0c0d6f7fe13cd6dbba48de4e94f367c02d3cea33"} Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.115256 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.149925 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.163404 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xjvcb" podStartSLOduration=3.163380745 podStartE2EDuration="3.163380745s" podCreationTimestamp="2026-02-16 10:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:03.161002961 +0000 UTC m=+1460.854159141" watchObservedRunningTime="2026-02-16 10:10:03.163380745 +0000 UTC m=+1460.856536925" Feb 16 10:10:03 crc kubenswrapper[4814]: I0216 10:10:03.838429 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-grjp4"] Feb 16 10:10:04 crc kubenswrapper[4814]: I0216 10:10:04.122853 4814 generic.go:334] "Generic (PLEG): container finished" podID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerID="a4533cf677dba205f1de5578f8320b09c655bf8a777830aa1ee151cb3ba0babe" exitCode=0 Feb 16 10:10:04 crc kubenswrapper[4814]: I0216 10:10:04.124002 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" event={"ID":"cb4668be-2f35-40f3-b565-5e2870feba0f","Type":"ContainerDied","Data":"a4533cf677dba205f1de5578f8320b09c655bf8a777830aa1ee151cb3ba0babe"} Feb 16 10:10:04 crc kubenswrapper[4814]: I0216 10:10:04.124038 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" event={"ID":"cb4668be-2f35-40f3-b565-5e2870feba0f","Type":"ContainerStarted","Data":"f72dd998448bfa3a35d6648f36fe01bac71f057df5e21d3a00a6dcc717afc8e4"} Feb 16 10:10:05 crc kubenswrapper[4814]: I0216 10:10:05.422474 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 10:10:05 crc kubenswrapper[4814]: I0216 10:10:05.447722 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 10:10:06 crc kubenswrapper[4814]: I0216 10:10:06.147466 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-grjp4" event={"ID":"24af27d6-b9ee-4abc-b460-7633eb556cd7","Type":"ContainerStarted","Data":"9a93599a78da67ef7d55defcf6a49b659efbd94258a8f6ce287ee2d43d20f0fd"} Feb 16 10:10:06 crc kubenswrapper[4814]: I0216 10:10:06.994115 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d" Feb 16 10:10:06 crc kubenswrapper[4814]: E0216 10:10:06.995803 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.161243 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-grjp4" event={"ID":"24af27d6-b9ee-4abc-b460-7633eb556cd7","Type":"ContainerStarted","Data":"e05fdfec37ca4b0c4b83e8ab45db009e22805b6d5e149726ed96819c763fc9a3"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.164711 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerStarted","Data":"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.164744 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerStarted","Data":"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.168012 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerStarted","Data":"7e6e11f0f3b0f1dda117185329ebc42b8e72c57cb26834e47dde9887597eb3ef"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.168059 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerStarted","Data":"e06f9f390b0936e1827309c1bc5defe17169372afffe9c888c4ca1cfa15d28f9"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.168570 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-log" containerID="cri-o://e06f9f390b0936e1827309c1bc5defe17169372afffe9c888c4ca1cfa15d28f9" gracePeriod=30 Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.168660 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-metadata" containerID="cri-o://7e6e11f0f3b0f1dda117185329ebc42b8e72c57cb26834e47dde9887597eb3ef" gracePeriod=30 Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.171399 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" event={"ID":"cb4668be-2f35-40f3-b565-5e2870feba0f","Type":"ContainerStarted","Data":"6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.171618 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:07 crc kubenswrapper[4814]: 
I0216 10:10:07.176644 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec7b7475-afbe-4248-8469-1cacc110749a","Type":"ContainerStarted","Data":"0fa0902f78b650e460a008de88f8b3d05fb45bb842c8ed823885fe75e2238e10"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.176824 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ec7b7475-afbe-4248-8469-1cacc110749a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0fa0902f78b650e460a008de88f8b3d05fb45bb842c8ed823885fe75e2238e10" gracePeriod=30 Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.182645 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-grjp4" podStartSLOduration=5.182625635 podStartE2EDuration="5.182625635s" podCreationTimestamp="2026-02-16 10:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:07.177271931 +0000 UTC m=+1464.870428121" watchObservedRunningTime="2026-02-16 10:10:07.182625635 +0000 UTC m=+1464.875781815" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.184270 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00da9365-eada-4d4d-8edf-636919e9d54d","Type":"ContainerStarted","Data":"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b"} Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.209569 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.4964511160000002 podStartE2EDuration="6.209515781s" podCreationTimestamp="2026-02-16 10:10:01 +0000 UTC" firstStartedPulling="2026-02-16 10:10:02.678480621 +0000 UTC m=+1460.371636801" lastFinishedPulling="2026-02-16 10:10:06.391545286 +0000 UTC m=+1464.084701466" 
observedRunningTime="2026-02-16 10:10:07.200772235 +0000 UTC m=+1464.893928425" watchObservedRunningTime="2026-02-16 10:10:07.209515781 +0000 UTC m=+1464.902671961" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.232130 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.51583765 podStartE2EDuration="6.23210908s" podCreationTimestamp="2026-02-16 10:10:01 +0000 UTC" firstStartedPulling="2026-02-16 10:10:02.679762546 +0000 UTC m=+1460.372918726" lastFinishedPulling="2026-02-16 10:10:06.396033976 +0000 UTC m=+1464.089190156" observedRunningTime="2026-02-16 10:10:07.222165761 +0000 UTC m=+1464.915321951" watchObservedRunningTime="2026-02-16 10:10:07.23210908 +0000 UTC m=+1464.925265260" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.244678 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.834029798 podStartE2EDuration="6.244634167s" podCreationTimestamp="2026-02-16 10:10:01 +0000 UTC" firstStartedPulling="2026-02-16 10:10:03.016407412 +0000 UTC m=+1460.709563592" lastFinishedPulling="2026-02-16 10:10:06.427011781 +0000 UTC m=+1464.120167961" observedRunningTime="2026-02-16 10:10:07.24178264 +0000 UTC m=+1464.934938840" watchObservedRunningTime="2026-02-16 10:10:07.244634167 +0000 UTC m=+1464.937790347" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.272209 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" podStartSLOduration=6.27218999 podStartE2EDuration="6.27218999s" podCreationTimestamp="2026-02-16 10:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:07.269106727 +0000 UTC m=+1464.962262907" watchObservedRunningTime="2026-02-16 10:10:07.27218999 +0000 UTC m=+1464.965346170" Feb 16 10:10:07 crc 
kubenswrapper[4814]: I0216 10:10:07.305632 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.872188086 podStartE2EDuration="6.305611721s" podCreationTimestamp="2026-02-16 10:10:01 +0000 UTC" firstStartedPulling="2026-02-16 10:10:02.989731223 +0000 UTC m=+1460.682887393" lastFinishedPulling="2026-02-16 10:10:06.423154858 +0000 UTC m=+1464.116311028" observedRunningTime="2026-02-16 10:10:07.300613576 +0000 UTC m=+1464.993769756" watchObservedRunningTime="2026-02-16 10:10:07.305611721 +0000 UTC m=+1464.998767901" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.959831 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.960158 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.960217 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:10:07 crc kubenswrapper[4814]: I0216 10:10:07.961170 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:10:07 
crc kubenswrapper[4814]: I0216 10:10:07.961240 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1" gracePeriod=600 Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.136883 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.233741 4814 generic.go:334] "Generic (PLEG): container finished" podID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerID="e06f9f390b0936e1827309c1bc5defe17169372afffe9c888c4ca1cfa15d28f9" exitCode=143 Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.233840 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerDied","Data":"e06f9f390b0936e1827309c1bc5defe17169372afffe9c888c4ca1cfa15d28f9"} Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.245861 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1" exitCode=0 Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.245965 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1"} Feb 16 10:10:08 crc kubenswrapper[4814]: I0216 10:10:08.246018 4814 scope.go:117] "RemoveContainer" containerID="c7db1806bf7a6e5cd75b04a931b3fd46bd321177245f8fbccf4bd3b036932bbf" Feb 16 10:10:09 crc kubenswrapper[4814]: I0216 10:10:09.273908 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40"} Feb 16 10:10:11 crc kubenswrapper[4814]: I0216 10:10:11.878612 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:11 crc kubenswrapper[4814]: I0216 10:10:11.879145 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:11 crc kubenswrapper[4814]: I0216 10:10:11.904762 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:11 crc kubenswrapper[4814]: I0216 10:10:11.963429 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:10:11 crc kubenswrapper[4814]: I0216 10:10:11.963768 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.076717 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.076782 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.112129 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.194486 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.309006 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"] Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.312620 4814 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="dnsmasq-dns" containerID="cri-o://2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528" gracePeriod=10 Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.440038 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.919833 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:12 crc kubenswrapper[4814]: I0216 10:10:12.961820 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.053272 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7964bd959-r5xpf"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131465 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131546 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131606 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131661 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xmf\" (UniqueName: \"kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131688 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.131716 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0\") pod \"8500ec66-11d7-4826-be1d-0ab947450b54\" (UID: \"8500ec66-11d7-4826-be1d-0ab947450b54\") "
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.139044 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf" (OuterVolumeSpecName: "kube-api-access-55xmf") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "kube-api-access-55xmf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.195527 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.205585 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config" (OuterVolumeSpecName: "config") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.227277 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.234104 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-config\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.234143 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xmf\" (UniqueName: \"kubernetes.io/projected/8500ec66-11d7-4826-be1d-0ab947450b54-kube-api-access-55xmf\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.234154 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.234163 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.235936 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.253644 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8500ec66-11d7-4826-be1d-0ab947450b54" (UID: "8500ec66-11d7-4826-be1d-0ab947450b54"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.336052 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.336121 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8500ec66-11d7-4826-be1d-0ab947450b54-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.337980 4814 generic.go:334] "Generic (PLEG): container finished" podID="f4cc5476-cf44-45e0-877d-85494accff3c" containerID="9b17d764852f065060f7a58d165a60d709fd5339df426c0f16a6f192aed24a6a" exitCode=0
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.338079 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjvcb" event={"ID":"f4cc5476-cf44-45e0-877d-85494accff3c","Type":"ContainerDied","Data":"9b17d764852f065060f7a58d165a60d709fd5339df426c0f16a6f192aed24a6a"}
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.341795 4814 generic.go:334] "Generic (PLEG): container finished" podID="8500ec66-11d7-4826-be1d-0ab947450b54" containerID="2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528" exitCode=0
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.342973 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7964bd959-r5xpf"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.344579 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" event={"ID":"8500ec66-11d7-4826-be1d-0ab947450b54","Type":"ContainerDied","Data":"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"}
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.344633 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" event={"ID":"8500ec66-11d7-4826-be1d-0ab947450b54","Type":"ContainerDied","Data":"c1138fe7c6af6158b18ec1441b00ef27f2e3cb0248182c2102446ee86d70253b"}
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.344652 4814 scope.go:117] "RemoveContainer" containerID="2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.371155 4814 scope.go:117] "RemoveContainer" containerID="35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.393008 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"]
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.398479 4814 scope.go:117] "RemoveContainer" containerID="2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"
Feb 16 10:10:13 crc kubenswrapper[4814]: E0216 10:10:13.399025 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528\": container with ID starting with 2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528 not found: ID does not exist" containerID="2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.399081 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528"} err="failed to get container status \"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528\": rpc error: code = NotFound desc = could not find container \"2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528\": container with ID starting with 2da2ab0a145881f11f9b1595bba8bb3034d9e5add4aab844d014f128f6b8d528 not found: ID does not exist"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.399116 4814 scope.go:117] "RemoveContainer" containerID="35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7"
Feb 16 10:10:13 crc kubenswrapper[4814]: E0216 10:10:13.399452 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7\": container with ID starting with 35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7 not found: ID does not exist" containerID="35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.399517 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7"} err="failed to get container status \"35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7\": rpc error: code = NotFound desc = could not find container \"35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7\": container with ID starting with 35b9e0b71d9eb33fc346aea759362fa8af9efe6384b2c808eaf20f8356b592d7 not found: ID does not exist"
Feb 16 10:10:13 crc kubenswrapper[4814]: I0216 10:10:13.409971 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7964bd959-r5xpf"]
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.778652 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjvcb"
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.972111 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq49h\" (UniqueName: \"kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h\") pod \"f4cc5476-cf44-45e0-877d-85494accff3c\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") "
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.972229 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle\") pod \"f4cc5476-cf44-45e0-877d-85494accff3c\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") "
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.972604 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data\") pod \"f4cc5476-cf44-45e0-877d-85494accff3c\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") "
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.972740 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts\") pod \"f4cc5476-cf44-45e0-877d-85494accff3c\" (UID: \"f4cc5476-cf44-45e0-877d-85494accff3c\") "
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.979651 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h" (OuterVolumeSpecName: "kube-api-access-vq49h") pod "f4cc5476-cf44-45e0-877d-85494accff3c" (UID: "f4cc5476-cf44-45e0-877d-85494accff3c"). InnerVolumeSpecName "kube-api-access-vq49h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:14 crc kubenswrapper[4814]: I0216 10:10:14.980902 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts" (OuterVolumeSpecName: "scripts") pod "f4cc5476-cf44-45e0-877d-85494accff3c" (UID: "f4cc5476-cf44-45e0-877d-85494accff3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.022003 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" path="/var/lib/kubelet/pods/8500ec66-11d7-4826-be1d-0ab947450b54/volumes"
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.035491 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data" (OuterVolumeSpecName: "config-data") pod "f4cc5476-cf44-45e0-877d-85494accff3c" (UID: "f4cc5476-cf44-45e0-877d-85494accff3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.052326 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4cc5476-cf44-45e0-877d-85494accff3c" (UID: "f4cc5476-cf44-45e0-877d-85494accff3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.075825 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq49h\" (UniqueName: \"kubernetes.io/projected/f4cc5476-cf44-45e0-877d-85494accff3c-kube-api-access-vq49h\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.076592 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.076660 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.076674 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4cc5476-cf44-45e0-877d-85494accff3c-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.369419 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xjvcb" event={"ID":"f4cc5476-cf44-45e0-877d-85494accff3c","Type":"ContainerDied","Data":"c8f37e73fdb5547f95bebf0e79f43f8a2fb5a81de672e40051d868840b2f33cb"}
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.369463 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f37e73fdb5547f95bebf0e79f43f8a2fb5a81de672e40051d868840b2f33cb"
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.369577 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xjvcb"
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.565253 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.565586 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" containerName="nova-scheduler-scheduler" containerID="cri-o://753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" gracePeriod=30
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.579354 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.579857 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-log" containerID="cri-o://21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf" gracePeriod=30
Feb 16 10:10:15 crc kubenswrapper[4814]: I0216 10:10:15.580096 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-api" containerID="cri-o://e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140" gracePeriod=30
Feb 16 10:10:16 crc kubenswrapper[4814]: I0216 10:10:16.380308 4814 generic.go:334] "Generic (PLEG): container finished" podID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerID="21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf" exitCode=143
Feb 16 10:10:16 crc kubenswrapper[4814]: I0216 10:10:16.380360 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerDied","Data":"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"}
Feb 16 10:10:17 crc kubenswrapper[4814]: E0216 10:10:17.083366 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 10:10:17 crc kubenswrapper[4814]: E0216 10:10:17.085913 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 10:10:17 crc kubenswrapper[4814]: E0216 10:10:17.087946 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 10:10:17 crc kubenswrapper[4814]: E0216 10:10:17.088015 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" containerName="nova-scheduler-scheduler"
Feb 16 10:10:17 crc kubenswrapper[4814]: I0216 10:10:17.787172 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7964bd959-r5xpf" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout"
Feb 16 10:10:17 crc kubenswrapper[4814]: I0216 10:10:17.993854 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d"
Feb 16 10:10:17 crc kubenswrapper[4814]: E0216 10:10:17.994320 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:10:18 crc kubenswrapper[4814]: I0216 10:10:18.951168 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.071362 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfdc8\" (UniqueName: \"kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8\") pod \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") "
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.071430 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data\") pod \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") "
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.071490 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs\") pod \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") "
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.071968 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs" (OuterVolumeSpecName: "logs") pod "e9b5d03d-bffa-4ea3-afd3-59beb082d855" (UID: "e9b5d03d-bffa-4ea3-afd3-59beb082d855"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.072024 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle\") pod \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\" (UID: \"e9b5d03d-bffa-4ea3-afd3-59beb082d855\") "
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.072469 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9b5d03d-bffa-4ea3-afd3-59beb082d855-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.098352 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data" (OuterVolumeSpecName: "config-data") pod "e9b5d03d-bffa-4ea3-afd3-59beb082d855" (UID: "e9b5d03d-bffa-4ea3-afd3-59beb082d855"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.100412 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8" (OuterVolumeSpecName: "kube-api-access-zfdc8") pod "e9b5d03d-bffa-4ea3-afd3-59beb082d855" (UID: "e9b5d03d-bffa-4ea3-afd3-59beb082d855"). InnerVolumeSpecName "kube-api-access-zfdc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.105806 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9b5d03d-bffa-4ea3-afd3-59beb082d855" (UID: "e9b5d03d-bffa-4ea3-afd3-59beb082d855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.174478 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfdc8\" (UniqueName: \"kubernetes.io/projected/e9b5d03d-bffa-4ea3-afd3-59beb082d855-kube-api-access-zfdc8\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.174527 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.174556 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9b5d03d-bffa-4ea3-afd3-59beb082d855-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.417546 4814 generic.go:334] "Generic (PLEG): container finished" podID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerID="e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140" exitCode=0
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.417614 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.417613 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerDied","Data":"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"}
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.418154 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e9b5d03d-bffa-4ea3-afd3-59beb082d855","Type":"ContainerDied","Data":"33a250cb2c5fda3a6acd7c17a888c239a91669ee4e485cf23117ff6bb878b5e4"}
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.418173 4814 scope.go:117] "RemoveContainer" containerID="e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.419811 4814 generic.go:334] "Generic (PLEG): container finished" podID="24af27d6-b9ee-4abc-b460-7633eb556cd7" containerID="e05fdfec37ca4b0c4b83e8ab45db009e22805b6d5e149726ed96819c763fc9a3" exitCode=0
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.419863 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-grjp4" event={"ID":"24af27d6-b9ee-4abc-b460-7633eb556cd7","Type":"ContainerDied","Data":"e05fdfec37ca4b0c4b83e8ab45db009e22805b6d5e149726ed96819c763fc9a3"}
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.466678 4814 scope.go:117] "RemoveContainer" containerID="21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.478220 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.509962 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.521468 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.522240 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="dnsmasq-dns"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522261 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="dnsmasq-dns"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.522280 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-log"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522286 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-log"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.522301 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="init"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522307 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="init"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.522322 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4cc5476-cf44-45e0-877d-85494accff3c" containerName="nova-manage"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522327 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cc5476-cf44-45e0-877d-85494accff3c" containerName="nova-manage"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.522337 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-api"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522344 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-api"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522549 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4cc5476-cf44-45e0-877d-85494accff3c" containerName="nova-manage"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522569 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="8500ec66-11d7-4826-be1d-0ab947450b54" containerName="dnsmasq-dns"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522579 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-log"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.522590 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" containerName="nova-api-api"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.523260 4814 scope.go:117] "RemoveContainer" containerID="e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.523821 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.527260 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.527482 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140\": container with ID starting with e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140 not found: ID does not exist" containerID="e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.527521 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140"} err="failed to get container status \"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140\": rpc error: code = NotFound desc = could not find container \"e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140\": container with ID starting with e8849939af0ff0cad7fd17d58215f3507ddc318d66eb015357831320d5119140 not found: ID does not exist"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.527560 4814 scope.go:117] "RemoveContainer" containerID="21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"
Feb 16 10:10:19 crc kubenswrapper[4814]: E0216 10:10:19.529004 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf\": container with ID starting with 21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf not found: ID does not exist" containerID="21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.529036 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf"} err="failed to get container status \"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf\": rpc error: code = NotFound desc = could not find container \"21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf\": container with ID starting with 21cb232c4b38601af06345d15a2a77cfb2de55e6e18373e6f9c0dfb0fcfc93cf not found: ID does not exist"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.543663 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.690484 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.690564 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.690684 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnzx\" (UniqueName: \"kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.690815 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.792688 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.792754 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.792775 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.792867 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xnzx\" (UniqueName: \"kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.793267 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.797577 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.801209 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.817305 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xnzx\" (UniqueName: \"kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx\") pod \"nova-api-0\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " pod="openstack/nova-api-0"
Feb 16 10:10:19 crc kubenswrapper[4814]: I0216 10:10:19.848095 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.113941 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.200244 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle\") pod \"00da9365-eada-4d4d-8edf-636919e9d54d\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") "
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.200317 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data\") pod \"00da9365-eada-4d4d-8edf-636919e9d54d\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") "
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.200505 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwv64\" (UniqueName: \"kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64\") pod \"00da9365-eada-4d4d-8edf-636919e9d54d\" (UID: \"00da9365-eada-4d4d-8edf-636919e9d54d\") "
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.209868 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64" (OuterVolumeSpecName: "kube-api-access-bwv64") pod "00da9365-eada-4d4d-8edf-636919e9d54d" (UID: "00da9365-eada-4d4d-8edf-636919e9d54d"). InnerVolumeSpecName "kube-api-access-bwv64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.237241 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data" (OuterVolumeSpecName: "config-data") pod "00da9365-eada-4d4d-8edf-636919e9d54d" (UID: "00da9365-eada-4d4d-8edf-636919e9d54d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.246699 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00da9365-eada-4d4d-8edf-636919e9d54d" (UID: "00da9365-eada-4d4d-8edf-636919e9d54d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.303222 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwv64\" (UniqueName: \"kubernetes.io/projected/00da9365-eada-4d4d-8edf-636919e9d54d-kube-api-access-bwv64\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.303253 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.303265 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00da9365-eada-4d4d-8edf-636919e9d54d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.368040 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:20 crc kubenswrapper[4814]: W0216 10:10:20.373383 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a70140f_a057_4f48_8bb3_75022b4934d2.slice/crio-108d5822381ef1fb3062f99018436c640d2871e269786377af06ea8151a5364e WatchSource:0}: Error finding container 108d5822381ef1fb3062f99018436c640d2871e269786377af06ea8151a5364e: Status 404 returned error can't find the container with id 
108d5822381ef1fb3062f99018436c640d2871e269786377af06ea8151a5364e Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.431584 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerStarted","Data":"108d5822381ef1fb3062f99018436c640d2871e269786377af06ea8151a5364e"} Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.433178 4814 generic.go:334] "Generic (PLEG): container finished" podID="00da9365-eada-4d4d-8edf-636919e9d54d" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" exitCode=0 Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.433375 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.435017 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00da9365-eada-4d4d-8edf-636919e9d54d","Type":"ContainerDied","Data":"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b"} Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.435078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"00da9365-eada-4d4d-8edf-636919e9d54d","Type":"ContainerDied","Data":"b690d7eb9c6bedf72745f6989245b5e45c3824bb8e5596b93ff6a849f30e74d3"} Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.435101 4814 scope.go:117] "RemoveContainer" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.472604 4814 scope.go:117] "RemoveContainer" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" Feb 16 10:10:20 crc kubenswrapper[4814]: E0216 10:10:20.473403 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b\": container with ID starting with 753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b not found: ID does not exist" containerID="753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.473454 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b"} err="failed to get container status \"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b\": rpc error: code = NotFound desc = could not find container \"753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b\": container with ID starting with 753ac122f937d3707468f7b58a837fc1af5f751d00d181ca8d376be579cf653b not found: ID does not exist" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.493790 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.514670 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.528625 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:20 crc kubenswrapper[4814]: E0216 10:10:20.530845 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" containerName="nova-scheduler-scheduler" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.530872 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" containerName="nova-scheduler-scheduler" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.531205 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" containerName="nova-scheduler-scheduler" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 
10:10:20.532597 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.538758 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.540980 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.610470 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.619012 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.619238 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn5fh\" (UniqueName: \"kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.724985 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc 
kubenswrapper[4814]: I0216 10:10:20.725603 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn5fh\" (UniqueName: \"kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.725657 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.744357 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.744767 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.762292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn5fh\" (UniqueName: \"kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh\") pod \"nova-scheduler-0\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " pod="openstack/nova-scheduler-0" Feb 16 10:10:20 crc kubenswrapper[4814]: I0216 10:10:20.874199 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.013354 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00da9365-eada-4d4d-8edf-636919e9d54d" path="/var/lib/kubelet/pods/00da9365-eada-4d4d-8edf-636919e9d54d/volumes" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.014369 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b5d03d-bffa-4ea3-afd3-59beb082d855" path="/var/lib/kubelet/pods/e9b5d03d-bffa-4ea3-afd3-59beb082d855/volumes" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.082671 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.246784 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle\") pod \"24af27d6-b9ee-4abc-b460-7633eb556cd7\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.248055 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts\") pod \"24af27d6-b9ee-4abc-b460-7633eb556cd7\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.248147 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkmz6\" (UniqueName: \"kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6\") pod \"24af27d6-b9ee-4abc-b460-7633eb556cd7\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.248218 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data\") pod \"24af27d6-b9ee-4abc-b460-7633eb556cd7\" (UID: \"24af27d6-b9ee-4abc-b460-7633eb556cd7\") " Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.255951 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6" (OuterVolumeSpecName: "kube-api-access-gkmz6") pod "24af27d6-b9ee-4abc-b460-7633eb556cd7" (UID: "24af27d6-b9ee-4abc-b460-7633eb556cd7"). InnerVolumeSpecName "kube-api-access-gkmz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.257804 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts" (OuterVolumeSpecName: "scripts") pod "24af27d6-b9ee-4abc-b460-7633eb556cd7" (UID: "24af27d6-b9ee-4abc-b460-7633eb556cd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.285485 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24af27d6-b9ee-4abc-b460-7633eb556cd7" (UID: "24af27d6-b9ee-4abc-b460-7633eb556cd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.291277 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data" (OuterVolumeSpecName: "config-data") pod "24af27d6-b9ee-4abc-b460-7633eb556cd7" (UID: "24af27d6-b9ee-4abc-b460-7633eb556cd7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.351506 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.351562 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkmz6\" (UniqueName: \"kubernetes.io/projected/24af27d6-b9ee-4abc-b460-7633eb556cd7-kube-api-access-gkmz6\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.351573 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.351587 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af27d6-b9ee-4abc-b460-7633eb556cd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.444359 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerStarted","Data":"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0"} Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.444408 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerStarted","Data":"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f"} Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.449989 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-grjp4" 
event={"ID":"24af27d6-b9ee-4abc-b460-7633eb556cd7","Type":"ContainerDied","Data":"9a93599a78da67ef7d55defcf6a49b659efbd94258a8f6ce287ee2d43d20f0fd"} Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.450048 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a93599a78da67ef7d55defcf6a49b659efbd94258a8f6ce287ee2d43d20f0fd" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.450054 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-grjp4" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.468023 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.46800674 podStartE2EDuration="2.46800674s" podCreationTimestamp="2026-02-16 10:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:21.462630635 +0000 UTC m=+1479.155786815" watchObservedRunningTime="2026-02-16 10:10:21.46800674 +0000 UTC m=+1479.161162920" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.534268 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.571076 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 10:10:21 crc kubenswrapper[4814]: E0216 10:10:21.571736 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af27d6-b9ee-4abc-b460-7633eb556cd7" containerName="nova-cell1-conductor-db-sync" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.571765 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af27d6-b9ee-4abc-b460-7633eb556cd7" containerName="nova-cell1-conductor-db-sync" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.572010 4814 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="24af27d6-b9ee-4abc-b460-7633eb556cd7" containerName="nova-cell1-conductor-db-sync" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.573976 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.586712 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.611636 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.660126 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.660560 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8l4\" (UniqueName: \"kubernetes.io/projected/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-kube-api-access-cr8l4\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.661056 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.763408 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8l4\" (UniqueName: 
\"kubernetes.io/projected/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-kube-api-access-cr8l4\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.763874 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.763943 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.769957 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.769997 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.782212 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8l4\" (UniqueName: \"kubernetes.io/projected/4e1bc2b6-5ddd-4528-bd46-e63a868552dd-kube-api-access-cr8l4\") pod \"nova-cell1-conductor-0\" (UID: 
\"4e1bc2b6-5ddd-4528-bd46-e63a868552dd\") " pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:21 crc kubenswrapper[4814]: I0216 10:10:21.912681 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:22 crc kubenswrapper[4814]: I0216 10:10:22.461503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed","Type":"ContainerStarted","Data":"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942"} Feb 16 10:10:22 crc kubenswrapper[4814]: I0216 10:10:22.461825 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed","Type":"ContainerStarted","Data":"1813c8f7a39e9972e9480fcf194bb3cbe55869af8ecae1dc0b4e2ffee8c6d5b1"} Feb 16 10:10:22 crc kubenswrapper[4814]: I0216 10:10:22.480468 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.480445968 podStartE2EDuration="2.480445968s" podCreationTimestamp="2026-02-16 10:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:22.476694617 +0000 UTC m=+1480.169850797" watchObservedRunningTime="2026-02-16 10:10:22.480445968 +0000 UTC m=+1480.173602138" Feb 16 10:10:23 crc kubenswrapper[4814]: I0216 10:10:23.175076 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 10:10:23 crc kubenswrapper[4814]: W0216 10:10:23.186322 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e1bc2b6_5ddd_4528_bd46_e63a868552dd.slice/crio-8c279d916d8e850a02d9ace5e7584579fb1a8564803d43d5e258fa1c7bef3db5 WatchSource:0}: Error finding container 8c279d916d8e850a02d9ace5e7584579fb1a8564803d43d5e258fa1c7bef3db5: 
Status 404 returned error can't find the container with id 8c279d916d8e850a02d9ace5e7584579fb1a8564803d43d5e258fa1c7bef3db5 Feb 16 10:10:23 crc kubenswrapper[4814]: I0216 10:10:23.472548 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4e1bc2b6-5ddd-4528-bd46-e63a868552dd","Type":"ContainerStarted","Data":"00c90c0002bb8769d49ab56c1f1260f12716c825c8cbe1d6a788fbb21ebafffb"} Feb 16 10:10:23 crc kubenswrapper[4814]: I0216 10:10:23.473101 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:23 crc kubenswrapper[4814]: I0216 10:10:23.473143 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4e1bc2b6-5ddd-4528-bd46-e63a868552dd","Type":"ContainerStarted","Data":"8c279d916d8e850a02d9ace5e7584579fb1a8564803d43d5e258fa1c7bef3db5"} Feb 16 10:10:23 crc kubenswrapper[4814]: I0216 10:10:23.497806 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.497779328 podStartE2EDuration="2.497779328s" podCreationTimestamp="2026-02-16 10:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:23.491756555 +0000 UTC m=+1481.184912735" watchObservedRunningTime="2026-02-16 10:10:23.497779328 +0000 UTC m=+1481.190935508" Feb 16 10:10:25 crc kubenswrapper[4814]: I0216 10:10:25.874802 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 10:10:29 crc kubenswrapper[4814]: I0216 10:10:29.849298 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:29 crc kubenswrapper[4814]: I0216 10:10:29.850032 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:29 crc 
kubenswrapper[4814]: I0216 10:10:29.995015 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d" Feb 16 10:10:30 crc kubenswrapper[4814]: I0216 10:10:30.874776 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 10:10:30 crc kubenswrapper[4814]: I0216 10:10:30.909845 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 10:10:30 crc kubenswrapper[4814]: I0216 10:10:30.932690 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:30 crc kubenswrapper[4814]: I0216 10:10:30.932937 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:31 crc kubenswrapper[4814]: I0216 10:10:31.562699 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2"} Feb 16 10:10:31 crc kubenswrapper[4814]: I0216 10:10:31.615450 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 10:10:31 crc kubenswrapper[4814]: I0216 10:10:31.950625 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 10:10:32 crc kubenswrapper[4814]: I0216 10:10:32.677096 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:10:34 crc kubenswrapper[4814]: I0216 10:10:34.594666 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" exitCode=0 Feb 16 10:10:34 crc kubenswrapper[4814]: I0216 10:10:34.594765 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2"} Feb 16 10:10:34 crc kubenswrapper[4814]: I0216 10:10:34.594851 4814 scope.go:117] "RemoveContainer" containerID="79a802f5d0e64fa7f55fcb2a60ea1203a92e293cf0c6150fa1a770146f12c13d" Feb 16 10:10:34 crc kubenswrapper[4814]: I0216 10:10:34.596409 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:10:34 crc kubenswrapper[4814]: E0216 10:10:34.597094 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:10:36 crc kubenswrapper[4814]: I0216 10:10:36.677184 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:10:36 crc kubenswrapper[4814]: I0216 10:10:36.678501 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:10:36 crc kubenswrapper[4814]: E0216 10:10:36.678831 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.644896 4814 generic.go:334] "Generic (PLEG): container finished" podID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerID="7e6e11f0f3b0f1dda117185329ebc42b8e72c57cb26834e47dde9887597eb3ef" exitCode=137
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.645043 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerDied","Data":"7e6e11f0f3b0f1dda117185329ebc42b8e72c57cb26834e47dde9887597eb3ef"}
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.658425 4814 generic.go:334] "Generic (PLEG): container finished" podID="ec7b7475-afbe-4248-8469-1cacc110749a" containerID="0fa0902f78b650e460a008de88f8b3d05fb45bb842c8ed823885fe75e2238e10" exitCode=137
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.658498 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec7b7475-afbe-4248-8469-1cacc110749a","Type":"ContainerDied","Data":"0fa0902f78b650e460a008de88f8b3d05fb45bb842c8ed823885fe75e2238e10"}
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.676805 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.677821 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2"
Feb 16 10:10:37 crc kubenswrapper[4814]: E0216 10:10:37.678223 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\""
pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.728757 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.809349 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle\") pod \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.809405 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqdw2\" (UniqueName: \"kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2\") pod \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.810746 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data\") pod \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.810924 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs\") pod \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\" (UID: \"6bbce985-e5b4-499a-af2b-8fd36ab9e13e\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.812642 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs" (OuterVolumeSpecName: "logs") pod "6bbce985-e5b4-499a-af2b-8fd36ab9e13e" (UID: "6bbce985-e5b4-499a-af2b-8fd36ab9e13e").
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.813681 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.825949 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2" (OuterVolumeSpecName: "kube-api-access-lqdw2") pod "6bbce985-e5b4-499a-af2b-8fd36ab9e13e" (UID: "6bbce985-e5b4-499a-af2b-8fd36ab9e13e"). InnerVolumeSpecName "kube-api-access-lqdw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.858628 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data" (OuterVolumeSpecName: "config-data") pod "6bbce985-e5b4-499a-af2b-8fd36ab9e13e" (UID: "6bbce985-e5b4-499a-af2b-8fd36ab9e13e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.882464 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.888496 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bbce985-e5b4-499a-af2b-8fd36ab9e13e" (UID: "6bbce985-e5b4-499a-af2b-8fd36ab9e13e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.915676 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data\") pod \"ec7b7475-afbe-4248-8469-1cacc110749a\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.915806 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle\") pod \"ec7b7475-afbe-4248-8469-1cacc110749a\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.915898 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zxct\" (UniqueName: \"kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct\") pod \"ec7b7475-afbe-4248-8469-1cacc110749a\" (UID: \"ec7b7475-afbe-4248-8469-1cacc110749a\") "
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.916591 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.916617 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.916635 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqdw2\" (UniqueName: \"kubernetes.io/projected/6bbce985-e5b4-499a-af2b-8fd36ab9e13e-kube-api-access-lqdw2\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216
10:10:37.920681 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct" (OuterVolumeSpecName: "kube-api-access-8zxct") pod "ec7b7475-afbe-4248-8469-1cacc110749a" (UID: "ec7b7475-afbe-4248-8469-1cacc110749a"). InnerVolumeSpecName "kube-api-access-8zxct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.952355 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec7b7475-afbe-4248-8469-1cacc110749a" (UID: "ec7b7475-afbe-4248-8469-1cacc110749a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:37 crc kubenswrapper[4814]: I0216 10:10:37.960170 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data" (OuterVolumeSpecName: "config-data") pod "ec7b7475-afbe-4248-8469-1cacc110749a" (UID: "ec7b7475-afbe-4248-8469-1cacc110749a"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.019823 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.020064 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec7b7475-afbe-4248-8469-1cacc110749a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.020165 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zxct\" (UniqueName: \"kubernetes.io/projected/ec7b7475-afbe-4248-8469-1cacc110749a-kube-api-access-8zxct\") on node \"crc\" DevicePath \"\""
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.676819 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6bbce985-e5b4-499a-af2b-8fd36ab9e13e","Type":"ContainerDied","Data":"625012abb55eb3d7d5a75039db38b1d274668c0c73fa3ccd23cad23aaceb0b85"}
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.677125 4814 scope.go:117] "RemoveContainer" containerID="7e6e11f0f3b0f1dda117185329ebc42b8e72c57cb26834e47dde9887597eb3ef"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.676843 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.681130 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec7b7475-afbe-4248-8469-1cacc110749a","Type":"ContainerDied","Data":"2eccda878cce121a4af97bab0c0d6f7fe13cd6dbba48de4e94f367c02d3cea33"}
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.681299 4814 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.701811 4814 scope.go:117] "RemoveContainer" containerID="e06f9f390b0936e1827309c1bc5defe17169372afffe9c888c4ca1cfa15d28f9"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.719324 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.739242 4814 scope.go:117] "RemoveContainer" containerID="0fa0902f78b650e460a008de88f8b3d05fb45bb842c8ed823885fe75e2238e10"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.740929 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.766604 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.787484 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.806312 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: E0216 10:10:38.806855 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-metadata"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.806870 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-metadata"
Feb 16 10:10:38 crc kubenswrapper[4814]: E0216 10:10:38.806891 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7b7475-afbe-4248-8469-1cacc110749a" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.806897 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7b7475-afbe-4248-8469-1cacc110749a"
containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 10:10:38 crc kubenswrapper[4814]: E0216 10:10:38.806916 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-log"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.806923 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-log"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.807109 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-log"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.807132 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7b7475-afbe-4248-8469-1cacc110749a" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.807143 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" containerName="nova-metadata-metadata"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.808224 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.811640 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.817312 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.823828 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.826414 4814 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.829520 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.829766 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.829893 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837491 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8sn\" (UniqueName: \"kubernetes.io/projected/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-kube-api-access-7t8sn\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837598 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837638 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837670 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2qg\" (UniqueName:
\"kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837696 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837723 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837747 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837771 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837806 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.837884 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.839480 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.855793 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.939835 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2qg\" (UniqueName: \"kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940250 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940290 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") "
pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940314 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940354 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940420 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940591 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940728 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8sn\" (UniqueName: \"kubernetes.io/projected/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-kube-api-access-7t8sn\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940773 4814 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.940824 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.941313 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.946460 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.946827 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.947135 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data\") pod \"nova-metadata-0\" (UID:
\"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.952234 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.952278 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.952338 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.956363 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216 10:10:38.961289 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2qg\" (UniqueName: \"kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg\") pod \"nova-metadata-0\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") " pod="openstack/nova-metadata-0"
Feb 16 10:10:38 crc kubenswrapper[4814]: I0216
10:10:38.961443 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8sn\" (UniqueName: \"kubernetes.io/projected/66ffd666-cd01-4fe7-b6a8-9c6a86abda53-kube-api-access-7t8sn\") pod \"nova-cell1-novncproxy-0\" (UID: \"66ffd666-cd01-4fe7-b6a8-9c6a86abda53\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.009354 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bbce985-e5b4-499a-af2b-8fd36ab9e13e" path="/var/lib/kubelet/pods/6bbce985-e5b4-499a-af2b-8fd36ab9e13e/volumes"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.010443 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7b7475-afbe-4248-8469-1cacc110749a" path="/var/lib/kubelet/pods/ec7b7475-afbe-4248-8469-1cacc110749a/volumes"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.127476 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.151654 4814 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.613142 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.695057 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.699131 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerStarted","Data":"b1065176dad9c1605f310d5c532f905b304dd76f8e2397fb4aa0f8c9b6aadb38"}
Feb 16 10:10:39 crc kubenswrapper[4814]: W0216 10:10:39.703526 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66ffd666_cd01_4fe7_b6a8_9c6a86abda53.slice/crio-390cdbfc9871acd35a2818623a5a42a0b25abc46ee3e92c0dffd9b724c9624f2 WatchSource:0}: Error finding container 390cdbfc9871acd35a2818623a5a42a0b25abc46ee3e92c0dffd9b724c9624f2: Status 404 returned error can't find the container with id 390cdbfc9871acd35a2818623a5a42a0b25abc46ee3e92c0dffd9b724c9624f2
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.858686 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.859398 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.865013 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 10:10:39 crc kubenswrapper[4814]: I0216 10:10:39.873460 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.717078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/nova-metadata-0" event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerStarted","Data":"a756a519661f8b29f5e4ade3a3a2ef4679cf0afc59b8fa735038f13f3e37e05d"}
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.718061 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerStarted","Data":"ae57e0aee747e053e38d12c28d053adae997439f7da6837dccddd8d9a9d67c1b"}
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.719374 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"66ffd666-cd01-4fe7-b6a8-9c6a86abda53","Type":"ContainerStarted","Data":"3c579e0fd06a3f141484e525d8e3a5f89944752277c319df92f063b6f0066021"}
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.719428 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"66ffd666-cd01-4fe7-b6a8-9c6a86abda53","Type":"ContainerStarted","Data":"390cdbfc9871acd35a2818623a5a42a0b25abc46ee3e92c0dffd9b724c9624f2"}
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.719650 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.745689 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.745663179 podStartE2EDuration="2.745663179s" podCreationTimestamp="2026-02-16 10:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:40.739889773 +0000 UTC m=+1498.433045973" watchObservedRunningTime="2026-02-16 10:10:40.745663179 +0000 UTC m=+1498.438819369"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.760398 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16
10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.778000 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.777976781 podStartE2EDuration="2.777976781s" podCreationTimestamp="2026-02-16 10:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:40.761971899 +0000 UTC m=+1498.455128069" watchObservedRunningTime="2026-02-16 10:10:40.777976781 +0000 UTC m=+1498.471132961"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.954274 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bcc884bbc-bvmwv"]
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.956140 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.969170 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcc884bbc-bvmwv"]
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982736 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-config\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982787 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv"
Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982809 4814 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982836 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgsfk\" (UniqueName: \"kubernetes.io/projected/974dd886-6966-4ab1-a46f-1c9a4973cb31-kube-api-access-xgsfk\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982856 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-svc\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:40 crc kubenswrapper[4814]: I0216 10:10:40.982890 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.085039 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgsfk\" (UniqueName: \"kubernetes.io/projected/974dd886-6966-4ab1-a46f-1c9a4973cb31-kube-api-access-xgsfk\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 
10:10:41.085287 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-svc\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.085349 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.085462 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-config\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.085495 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.085514 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.092116 4814 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-config\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.092701 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-nb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.096927 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-swift-storage-0\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.098213 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-ovsdbserver-sb\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.106160 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974dd886-6966-4ab1-a46f-1c9a4973cb31-dns-svc\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.147487 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgsfk\" (UniqueName: 
\"kubernetes.io/projected/974dd886-6966-4ab1-a46f-1c9a4973cb31-kube-api-access-xgsfk\") pod \"dnsmasq-dns-6bcc884bbc-bvmwv\" (UID: \"974dd886-6966-4ab1-a46f-1c9a4973cb31\") " pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.299301 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:41 crc kubenswrapper[4814]: I0216 10:10:41.959253 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bcc884bbc-bvmwv"] Feb 16 10:10:42 crc kubenswrapper[4814]: I0216 10:10:42.744687 4814 generic.go:334] "Generic (PLEG): container finished" podID="974dd886-6966-4ab1-a46f-1c9a4973cb31" containerID="0a12be6193277ab43e454313f383cd2c7a8ad74f5c6f54551587dcfccb55b02a" exitCode=0 Feb 16 10:10:42 crc kubenswrapper[4814]: I0216 10:10:42.744805 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" event={"ID":"974dd886-6966-4ab1-a46f-1c9a4973cb31","Type":"ContainerDied","Data":"0a12be6193277ab43e454313f383cd2c7a8ad74f5c6f54551587dcfccb55b02a"} Feb 16 10:10:42 crc kubenswrapper[4814]: I0216 10:10:42.744987 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" event={"ID":"974dd886-6966-4ab1-a46f-1c9a4973cb31","Type":"ContainerStarted","Data":"311a7d669e9f3655fe8d871cec858274fc694e43cdd517bfbebfe4a72194cac8"} Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 10:10:43.757080 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" event={"ID":"974dd886-6966-4ab1-a46f-1c9a4973cb31","Type":"ContainerStarted","Data":"3f0b5bf82d3d54a44435d6dfaa674962d47b30b9dc0bd2b55d7d26bd95d80496"} Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 10:10:43.757612 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 
10:10:43.796437 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 10:10:43.796755 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-log" containerID="cri-o://364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f" gracePeriod=30 Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 10:10:43.796875 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-api" containerID="cri-o://5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0" gracePeriod=30 Feb 16 10:10:43 crc kubenswrapper[4814]: I0216 10:10:43.814593 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" podStartSLOduration=3.814568186 podStartE2EDuration="3.814568186s" podCreationTimestamp="2026-02-16 10:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:43.79026793 +0000 UTC m=+1501.483424110" watchObservedRunningTime="2026-02-16 10:10:43.814568186 +0000 UTC m=+1501.507724366" Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.127835 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.128191 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.152772 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.249528 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.249976 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-central-agent" containerID="cri-o://134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791" gracePeriod=30 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.254968 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="proxy-httpd" containerID="cri-o://ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba" gracePeriod=30 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.255195 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="sg-core" containerID="cri-o://1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250" gracePeriod=30 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.255241 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-notification-agent" containerID="cri-o://e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812" gracePeriod=30 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.771059 4814 generic.go:334] "Generic (PLEG): container finished" podID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerID="ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba" exitCode=0 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.771096 4814 generic.go:334] "Generic (PLEG): container finished" podID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerID="1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250" exitCode=2 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.771175 4814 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerDied","Data":"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba"} Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.771236 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerDied","Data":"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250"} Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.773908 4814 generic.go:334] "Generic (PLEG): container finished" podID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerID="364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f" exitCode=143 Feb 16 10:10:44 crc kubenswrapper[4814]: I0216 10:10:44.773983 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerDied","Data":"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f"} Feb 16 10:10:45 crc kubenswrapper[4814]: I0216 10:10:45.788130 4814 generic.go:334] "Generic (PLEG): container finished" podID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerID="134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791" exitCode=0 Feb 16 10:10:45 crc kubenswrapper[4814]: I0216 10:10:45.788228 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerDied","Data":"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791"} Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.272787 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.344642 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs\") pod \"0a70140f-a057-4f48-8bb3-75022b4934d2\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.344718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xnzx\" (UniqueName: \"kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx\") pod \"0a70140f-a057-4f48-8bb3-75022b4934d2\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.344795 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data\") pod \"0a70140f-a057-4f48-8bb3-75022b4934d2\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.344825 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle\") pod \"0a70140f-a057-4f48-8bb3-75022b4934d2\" (UID: \"0a70140f-a057-4f48-8bb3-75022b4934d2\") " Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.361097 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx" (OuterVolumeSpecName: "kube-api-access-2xnzx") pod "0a70140f-a057-4f48-8bb3-75022b4934d2" (UID: "0a70140f-a057-4f48-8bb3-75022b4934d2"). InnerVolumeSpecName "kube-api-access-2xnzx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.361818 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs" (OuterVolumeSpecName: "logs") pod "0a70140f-a057-4f48-8bb3-75022b4934d2" (UID: "0a70140f-a057-4f48-8bb3-75022b4934d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.401942 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data" (OuterVolumeSpecName: "config-data") pod "0a70140f-a057-4f48-8bb3-75022b4934d2" (UID: "0a70140f-a057-4f48-8bb3-75022b4934d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.432695 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a70140f-a057-4f48-8bb3-75022b4934d2" (UID: "0a70140f-a057-4f48-8bb3-75022b4934d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.457249 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a70140f-a057-4f48-8bb3-75022b4934d2-logs\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.457284 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xnzx\" (UniqueName: \"kubernetes.io/projected/0a70140f-a057-4f48-8bb3-75022b4934d2-kube-api-access-2xnzx\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.457298 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.457311 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a70140f-a057-4f48-8bb3-75022b4934d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.800128 4814 generic.go:334] "Generic (PLEG): container finished" podID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerID="5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0" exitCode=0 Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.800184 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerDied","Data":"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0"} Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.800218 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a70140f-a057-4f48-8bb3-75022b4934d2","Type":"ContainerDied","Data":"108d5822381ef1fb3062f99018436c640d2871e269786377af06ea8151a5364e"} Feb 16 10:10:46 crc kubenswrapper[4814]: 
I0216 10:10:46.800242 4814 scope.go:117] "RemoveContainer" containerID="5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.800412 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.838711 4814 scope.go:117] "RemoveContainer" containerID="364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.864880 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.871156 4814 scope.go:117] "RemoveContainer" containerID="5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0" Feb 16 10:10:46 crc kubenswrapper[4814]: E0216 10:10:46.871797 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0\": container with ID starting with 5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0 not found: ID does not exist" containerID="5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.871834 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0"} err="failed to get container status \"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0\": rpc error: code = NotFound desc = could not find container \"5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0\": container with ID starting with 5d33dd351008ec5481b51c6ae20910495bd83e67557407c91128e92ee8891bb0 not found: ID does not exist" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.871858 4814 scope.go:117] "RemoveContainer" 
containerID="364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f" Feb 16 10:10:46 crc kubenswrapper[4814]: E0216 10:10:46.872500 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f\": container with ID starting with 364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f not found: ID does not exist" containerID="364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.872584 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f"} err="failed to get container status \"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f\": rpc error: code = NotFound desc = could not find container \"364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f\": container with ID starting with 364d9a59b874837f1b547b1f77f99080f06426176a4780ffe11179007a55752f not found: ID does not exist" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.877796 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.898402 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:46 crc kubenswrapper[4814]: E0216 10:10:46.899079 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-log" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.899106 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-log" Feb 16 10:10:46 crc kubenswrapper[4814]: E0216 10:10:46.899139 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" 
containerName="nova-api-api" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.899148 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-api" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.899406 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-log" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.899442 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" containerName="nova-api-api" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.900977 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.903792 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.903938 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.904033 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.929762 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.966847 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.967193 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qx4\" 
(UniqueName: \"kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.967273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.967361 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.967486 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:46 crc kubenswrapper[4814]: I0216 10:10:46.967700 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.005783 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a70140f-a057-4f48-8bb3-75022b4934d2" path="/var/lib/kubelet/pods/0a70140f-a057-4f48-8bb3-75022b4934d2/volumes" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.068762 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qx4\" (UniqueName: \"kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.068989 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.069070 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.069129 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.069282 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.069345 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.070359 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.074667 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.075420 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.078177 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.086363 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.096316 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qx4\" (UniqueName: 
\"kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4\") pod \"nova-api-0\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") " pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.227229 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.751504 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:10:47 crc kubenswrapper[4814]: I0216 10:10:47.813172 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerStarted","Data":"4f15ea9fa77b84f38a02b8e0ab2964552aba1321bcd7dbba0917f07d3c8ce7b0"} Feb 16 10:10:48 crc kubenswrapper[4814]: I0216 10:10:48.833086 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerStarted","Data":"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"} Feb 16 10:10:48 crc kubenswrapper[4814]: I0216 10:10:48.833570 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerStarted","Data":"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"} Feb 16 10:10:48 crc kubenswrapper[4814]: I0216 10:10:48.869039 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.869010188 podStartE2EDuration="2.869010188s" podCreationTimestamp="2026-02-16 10:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:48.8646613 +0000 UTC m=+1506.557817520" watchObservedRunningTime="2026-02-16 10:10:48.869010188 +0000 UTC m=+1506.562166368" Feb 16 10:10:49 crc kubenswrapper[4814]: I0216 10:10:49.128699 
4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 10:10:49 crc kubenswrapper[4814]: I0216 10:10:49.128766 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 10:10:49 crc kubenswrapper[4814]: I0216 10:10:49.152402 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:49 crc kubenswrapper[4814]: I0216 10:10:49.426109 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:49 crc kubenswrapper[4814]: I0216 10:10:49.868783 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.072586 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6lgml"] Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.074166 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.082034 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.082321 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.085930 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6lgml"] Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.178786 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.178816 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.222:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.251183 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnlrx\" (UniqueName: \"kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.251274 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.251294 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.251334 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.353901 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnlrx\" (UniqueName: \"kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.354348 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.354483 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.354658 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.360465 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.362222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.368238 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts\") pod \"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.376479 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnlrx\" (UniqueName: \"kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx\") pod 
\"nova-cell1-cell-mapping-6lgml\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:50 crc kubenswrapper[4814]: I0216 10:10:50.416217 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:50.998074 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:10:51 crc kubenswrapper[4814]: E0216 10:10:50.998777 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.007218 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6lgml"] Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.302341 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bcc884bbc-bvmwv" Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.434053 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.434884 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="dnsmasq-dns" containerID="cri-o://6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47" gracePeriod=10 Feb 16 10:10:51 crc kubenswrapper[4814]: E0216 10:10:51.684217 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb4668be_2f35_40f3_b565_5e2870feba0f.slice/crio-6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb4668be_2f35_40f3_b565_5e2870feba0f.slice/crio-conmon-6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47.scope\": RecentStats: unable to find data in memory cache]" Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.873318 4814 generic.go:334] "Generic (PLEG): container finished" podID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerID="6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47" exitCode=0 Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.873699 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" event={"ID":"cb4668be-2f35-40f3-b565-5e2870feba0f","Type":"ContainerDied","Data":"6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47"} Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.876029 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6lgml" event={"ID":"54488708-2f13-4ecc-a7a3-fb7372dc39ee","Type":"ContainerStarted","Data":"2c90e2aa8697c3fcee437225964bfe9ec69dfea4e3df918ca42aa2ed408e7c3d"} Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.876092 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6lgml" event={"ID":"54488708-2f13-4ecc-a7a3-fb7372dc39ee","Type":"ContainerStarted","Data":"7f6853527ae83d5ed89afc7db5b1d75fff980d379aee57df9d2fe0a4af7d804f"} Feb 16 10:10:51 crc kubenswrapper[4814]: I0216 10:10:51.897599 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6lgml" podStartSLOduration=1.897579417 podStartE2EDuration="1.897579417s" podCreationTimestamp="2026-02-16 10:10:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:10:51.89731342 +0000 UTC m=+1509.590469600" watchObservedRunningTime="2026-02-16 10:10:51.897579417 +0000 UTC m=+1509.590735597" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.040871 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.218715 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.218973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.219924 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.220002 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.220051 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.220271 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0\") pod \"cb4668be-2f35-40f3-b565-5e2870feba0f\" (UID: \"cb4668be-2f35-40f3-b565-5e2870feba0f\") " Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.227513 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf" (OuterVolumeSpecName: "kube-api-access-llxkf") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "kube-api-access-llxkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.280653 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.287008 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.320914 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config" (OuterVolumeSpecName: "config") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.324940 4814 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-config\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.324962 4814 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.324973 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.324984 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxkf\" (UniqueName: \"kubernetes.io/projected/cb4668be-2f35-40f3-b565-5e2870feba0f-kube-api-access-llxkf\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.343502 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.377146 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cb4668be-2f35-40f3-b565-5e2870feba0f" (UID: "cb4668be-2f35-40f3-b565-5e2870feba0f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.431028 4814 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.431065 4814 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb4668be-2f35-40f3-b565-5e2870feba0f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.888377 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.888369 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76d8bd6559-6rc56" event={"ID":"cb4668be-2f35-40f3-b565-5e2870feba0f","Type":"ContainerDied","Data":"f72dd998448bfa3a35d6648f36fe01bac71f057df5e21d3a00a6dcc717afc8e4"} Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.888790 4814 scope.go:117] "RemoveContainer" containerID="6bfbcf62ee3d4fe0743ee965b85aaee24eb46090318572760e8a78caf1b65f47" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.914344 4814 scope.go:117] "RemoveContainer" containerID="a4533cf677dba205f1de5578f8320b09c655bf8a777830aa1ee151cb3ba0babe" Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.932954 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:52 crc kubenswrapper[4814]: I0216 10:10:52.948388 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76d8bd6559-6rc56"] Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.006868 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" path="/var/lib/kubelet/pods/cb4668be-2f35-40f3-b565-5e2870feba0f/volumes" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.587036 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777140 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777475 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbghn\" (UniqueName: \"kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777624 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777777 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777826 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777885 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777928 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.777960 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts\") pod \"a57872d9-3772-4b6b-b87b-543531bff0d7\" (UID: \"a57872d9-3772-4b6b-b87b-543531bff0d7\") " Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.780066 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.782304 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.784516 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts" (OuterVolumeSpecName: "scripts") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.785051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn" (OuterVolumeSpecName: "kube-api-access-vbghn") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "kube-api-access-vbghn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.821638 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.856771 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880214 4814 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880244 4814 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880253 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880262 4814 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a57872d9-3772-4b6b-b87b-543531bff0d7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880272 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbghn\" (UniqueName: \"kubernetes.io/projected/a57872d9-3772-4b6b-b87b-543531bff0d7-kube-api-access-vbghn\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.880283 4814 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.890726 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: 
"a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.902847 4814 generic.go:334] "Generic (PLEG): container finished" podID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerID="e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812" exitCode=0 Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.902925 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerDied","Data":"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812"} Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.902957 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a57872d9-3772-4b6b-b87b-543531bff0d7","Type":"ContainerDied","Data":"6984427bf844eebc733fa300dd66d33e0b6616507281e3d8121632617b8fe8c3"} Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.902981 4814 scope.go:117] "RemoveContainer" containerID="ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.903137 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.926503 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data" (OuterVolumeSpecName: "config-data") pod "a57872d9-3772-4b6b-b87b-543531bff0d7" (UID: "a57872d9-3772-4b6b-b87b-543531bff0d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.962895 4814 scope.go:117] "RemoveContainer" containerID="1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.981811 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.981835 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57872d9-3772-4b6b-b87b-543531bff0d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:53 crc kubenswrapper[4814]: I0216 10:10:53.985447 4814 scope.go:117] "RemoveContainer" containerID="e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.012158 4814 scope.go:117] "RemoveContainer" containerID="134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.047571 4814 scope.go:117] "RemoveContainer" containerID="ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.048235 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba\": container with ID starting with ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba not found: ID does not exist" containerID="ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.048285 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba"} 
err="failed to get container status \"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba\": rpc error: code = NotFound desc = could not find container \"ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba\": container with ID starting with ec5aec3891932a2ef90cf6f345efb5a77691c72bb27fb6611a4cef0b16a2afba not found: ID does not exist" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.048314 4814 scope.go:117] "RemoveContainer" containerID="1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.052776 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250\": container with ID starting with 1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250 not found: ID does not exist" containerID="1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.052825 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250"} err="failed to get container status \"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250\": rpc error: code = NotFound desc = could not find container \"1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250\": container with ID starting with 1b7e11964ac567f4feacd4b8c79aa4d35c1ed26b8c8d317cf4c977f9bb892250 not found: ID does not exist" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.052856 4814 scope.go:117] "RemoveContainer" containerID="e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.053262 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812\": container with ID starting with e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812 not found: ID does not exist" containerID="e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.053312 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812"} err="failed to get container status \"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812\": rpc error: code = NotFound desc = could not find container \"e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812\": container with ID starting with e09ea2f73c52888a4faa432468defc3933d323df59562d4f586acb33fe4e1812 not found: ID does not exist" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.053340 4814 scope.go:117] "RemoveContainer" containerID="134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.053719 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791\": container with ID starting with 134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791 not found: ID does not exist" containerID="134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.053741 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791"} err="failed to get container status \"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791\": rpc error: code = NotFound desc = could not find container \"134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791\": container with ID 
starting with 134f9c8ae9604f9e592cffcf5f51a0e57082405b539a04a20356f0baee9f2791 not found: ID does not exist" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.251478 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.293842 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.325717 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326315 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-central-agent" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326340 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-central-agent" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326351 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-notification-agent" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326358 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-notification-agent" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326386 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="init" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326393 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="init" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326415 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="dnsmasq-dns" Feb 16 10:10:54 crc 
kubenswrapper[4814]: I0216 10:10:54.326423 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="dnsmasq-dns" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326435 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="proxy-httpd" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326442 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="proxy-httpd" Feb 16 10:10:54 crc kubenswrapper[4814]: E0216 10:10:54.326459 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="sg-core" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326466 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="sg-core" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326667 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="sg-core" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326689 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-notification-agent" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326697 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4668be-2f35-40f3-b565-5e2870feba0f" containerName="dnsmasq-dns" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326713 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="proxy-httpd" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.326722 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" containerName="ceilometer-central-agent" Feb 16 10:10:54 crc 
kubenswrapper[4814]: I0216 10:10:54.329602 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.335807 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.336103 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.339714 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.347560 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496209 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496386 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-scripts\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496463 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496502 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fct5h\" (UniqueName: \"kubernetes.io/projected/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-kube-api-access-fct5h\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496562 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496600 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.496626 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-config-data\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599271 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599359 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fct5h\" (UniqueName: \"kubernetes.io/projected/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-kube-api-access-fct5h\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599406 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599450 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599478 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-config-data\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599570 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc 
kubenswrapper[4814]: I0216 10:10:54.599637 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.599973 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-run-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.600139 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-log-httpd\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.600662 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-scripts\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.604860 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-scripts\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.605028 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-config-data\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") 
" pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.605154 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.612797 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.613304 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.616343 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fct5h\" (UniqueName: \"kubernetes.io/projected/42c5d783-c68b-4e93-bfb3-1fe359b14e8a-kube-api-access-fct5h\") pod \"ceilometer-0\" (UID: \"42c5d783-c68b-4e93-bfb3-1fe359b14e8a\") " pod="openstack/ceilometer-0" Feb 16 10:10:54 crc kubenswrapper[4814]: I0216 10:10:54.656587 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 10:10:55 crc kubenswrapper[4814]: I0216 10:10:55.007803 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a57872d9-3772-4b6b-b87b-543531bff0d7" path="/var/lib/kubelet/pods/a57872d9-3772-4b6b-b87b-543531bff0d7/volumes" Feb 16 10:10:55 crc kubenswrapper[4814]: I0216 10:10:55.122608 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 10:10:55 crc kubenswrapper[4814]: I0216 10:10:55.933984 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c5d783-c68b-4e93-bfb3-1fe359b14e8a","Type":"ContainerStarted","Data":"a1d58c2a4ffe17025278c8d6327c9ecfb0a3b0571588b3dbfe6a3fff5e3d5d89"} Feb 16 10:10:55 crc kubenswrapper[4814]: I0216 10:10:55.934583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c5d783-c68b-4e93-bfb3-1fe359b14e8a","Type":"ContainerStarted","Data":"dc64947898cf0a2c83704d2989b29ef6a42b01f037a78cd6a40d6b2132a9c272"} Feb 16 10:10:55 crc kubenswrapper[4814]: I0216 10:10:55.934601 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c5d783-c68b-4e93-bfb3-1fe359b14e8a","Type":"ContainerStarted","Data":"3ef05e60bedbb53e56e0d9279d5e0bc54a5cc1c793982c1718540a7b49627817"} Feb 16 10:10:56 crc kubenswrapper[4814]: I0216 10:10:56.952217 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c5d783-c68b-4e93-bfb3-1fe359b14e8a","Type":"ContainerStarted","Data":"7000fdb453bb9b0e5f5eef8875f9b5be8091818d43f1a6926e1ee1cfddf6a674"} Feb 16 10:10:57 crc kubenswrapper[4814]: I0216 10:10:57.227802 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:57 crc kubenswrapper[4814]: I0216 10:10:57.227870 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:10:57 crc 
kubenswrapper[4814]: I0216 10:10:57.964046 4814 generic.go:334] "Generic (PLEG): container finished" podID="54488708-2f13-4ecc-a7a3-fb7372dc39ee" containerID="2c90e2aa8697c3fcee437225964bfe9ec69dfea4e3df918ca42aa2ed408e7c3d" exitCode=0 Feb 16 10:10:57 crc kubenswrapper[4814]: I0216 10:10:57.964094 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6lgml" event={"ID":"54488708-2f13-4ecc-a7a3-fb7372dc39ee","Type":"ContainerDied","Data":"2c90e2aa8697c3fcee437225964bfe9ec69dfea4e3df918ca42aa2ed408e7c3d"} Feb 16 10:10:58 crc kubenswrapper[4814]: I0216 10:10:58.245772 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:58 crc kubenswrapper[4814]: I0216 10:10:58.245939 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:10:58 crc kubenswrapper[4814]: I0216 10:10:58.979877 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42c5d783-c68b-4e93-bfb3-1fe359b14e8a","Type":"ContainerStarted","Data":"d0bff1f876d40cfefcb9ca2aa263c20516d1f89ac8f3da0123c0fd85d66abcd4"} Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.018763 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.407533887 podStartE2EDuration="5.018741683s" podCreationTimestamp="2026-02-16 10:10:54 +0000 UTC" firstStartedPulling="2026-02-16 10:10:55.128653115 +0000 UTC m=+1512.821809295" lastFinishedPulling="2026-02-16 10:10:57.739860891 +0000 UTC 
m=+1515.433017091" observedRunningTime="2026-02-16 10:10:59.008691302 +0000 UTC m=+1516.701847492" watchObservedRunningTime="2026-02-16 10:10:59.018741683 +0000 UTC m=+1516.711897863" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.140934 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.144117 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.152564 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.431375 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.539278 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnlrx\" (UniqueName: \"kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx\") pod \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.539419 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts\") pod \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.539585 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle\") pod \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 
10:10:59.539690 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data\") pod \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\" (UID: \"54488708-2f13-4ecc-a7a3-fb7372dc39ee\") " Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.548778 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx" (OuterVolumeSpecName: "kube-api-access-gnlrx") pod "54488708-2f13-4ecc-a7a3-fb7372dc39ee" (UID: "54488708-2f13-4ecc-a7a3-fb7372dc39ee"). InnerVolumeSpecName "kube-api-access-gnlrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.555384 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts" (OuterVolumeSpecName: "scripts") pod "54488708-2f13-4ecc-a7a3-fb7372dc39ee" (UID: "54488708-2f13-4ecc-a7a3-fb7372dc39ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.572145 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data" (OuterVolumeSpecName: "config-data") pod "54488708-2f13-4ecc-a7a3-fb7372dc39ee" (UID: "54488708-2f13-4ecc-a7a3-fb7372dc39ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.579374 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54488708-2f13-4ecc-a7a3-fb7372dc39ee" (UID: "54488708-2f13-4ecc-a7a3-fb7372dc39ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.643188 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.643244 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.643254 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnlrx\" (UniqueName: \"kubernetes.io/projected/54488708-2f13-4ecc-a7a3-fb7372dc39ee-kube-api-access-gnlrx\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.643265 4814 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54488708-2f13-4ecc-a7a3-fb7372dc39ee-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.993045 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6lgml" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.995749 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6lgml" event={"ID":"54488708-2f13-4ecc-a7a3-fb7372dc39ee","Type":"ContainerDied","Data":"7f6853527ae83d5ed89afc7db5b1d75fff980d379aee57df9d2fe0a4af7d804f"} Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.995811 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6853527ae83d5ed89afc7db5b1d75fff980d379aee57df9d2fe0a4af7d804f" Feb 16 10:10:59 crc kubenswrapper[4814]: I0216 10:10:59.995944 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.038586 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.186039 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.186273 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-log" containerID="cri-o://0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842" gracePeriod=30 Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.186766 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-api" containerID="cri-o://179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60" gracePeriod=30 Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.246136 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.246375 4814 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" containerID="cri-o://3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" gracePeriod=30 Feb 16 10:11:00 crc kubenswrapper[4814]: I0216 10:11:00.264153 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 10:11:00 crc kubenswrapper[4814]: E0216 10:11:00.876747 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:00 crc kubenswrapper[4814]: E0216 10:11:00.879077 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:00 crc kubenswrapper[4814]: E0216 10:11:00.881629 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:00 crc kubenswrapper[4814]: E0216 10:11:00.881711 4814 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" 
Feb 16 10:11:01 crc kubenswrapper[4814]: I0216 10:11:01.004893 4814 generic.go:334] "Generic (PLEG): container finished" podID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerID="0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842" exitCode=143 Feb 16 10:11:01 crc kubenswrapper[4814]: I0216 10:11:01.006054 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerDied","Data":"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"} Feb 16 10:11:02 crc kubenswrapper[4814]: I0216 10:11:02.013736 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-log" containerID="cri-o://ae57e0aee747e053e38d12c28d053adae997439f7da6837dccddd8d9a9d67c1b" gracePeriod=30 Feb 16 10:11:02 crc kubenswrapper[4814]: I0216 10:11:02.013798 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-metadata" containerID="cri-o://a756a519661f8b29f5e4ade3a3a2ef4679cf0afc59b8fa735038f13f3e37e05d" gracePeriod=30 Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.026930 4814 generic.go:334] "Generic (PLEG): container finished" podID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerID="a756a519661f8b29f5e4ade3a3a2ef4679cf0afc59b8fa735038f13f3e37e05d" exitCode=0 Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.028475 4814 generic.go:334] "Generic (PLEG): container finished" podID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerID="ae57e0aee747e053e38d12c28d053adae997439f7da6837dccddd8d9a9d67c1b" exitCode=143 Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.028579 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerDied","Data":"a756a519661f8b29f5e4ade3a3a2ef4679cf0afc59b8fa735038f13f3e37e05d"}
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.028662 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerDied","Data":"ae57e0aee747e053e38d12c28d053adae997439f7da6837dccddd8d9a9d67c1b"}
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.390359 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.515823 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.542013 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs" (OuterVolumeSpecName: "logs") pod "d6b028db-695e-4825-acd9-77ef7f1c40cc" (UID: "d6b028db-695e-4825-acd9-77ef7f1c40cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.542080 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs\") pod \"d6b028db-695e-4825-acd9-77ef7f1c40cc\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.542200 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z2qg\" (UniqueName: \"kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg\") pod \"d6b028db-695e-4825-acd9-77ef7f1c40cc\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.542988 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data\") pod \"d6b028db-695e-4825-acd9-77ef7f1c40cc\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.543040 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle\") pod \"d6b028db-695e-4825-acd9-77ef7f1c40cc\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.543061 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs\") pod \"d6b028db-695e-4825-acd9-77ef7f1c40cc\" (UID: \"d6b028db-695e-4825-acd9-77ef7f1c40cc\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.544188 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6b028db-695e-4825-acd9-77ef7f1c40cc-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.552705 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg" (OuterVolumeSpecName: "kube-api-access-8z2qg") pod "d6b028db-695e-4825-acd9-77ef7f1c40cc" (UID: "d6b028db-695e-4825-acd9-77ef7f1c40cc"). InnerVolumeSpecName "kube-api-access-8z2qg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.586381 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data" (OuterVolumeSpecName: "config-data") pod "d6b028db-695e-4825-acd9-77ef7f1c40cc" (UID: "d6b028db-695e-4825-acd9-77ef7f1c40cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.588869 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6b028db-695e-4825-acd9-77ef7f1c40cc" (UID: "d6b028db-695e-4825-acd9-77ef7f1c40cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.606483 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d6b028db-695e-4825-acd9-77ef7f1c40cc" (UID: "d6b028db-695e-4825-acd9-77ef7f1c40cc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645103 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645287 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qx4\" (UniqueName: \"kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645335 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645417 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645475 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645578 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs\") pod \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\" (UID: \"df76dd96-361c-4bd3-8bcb-02b27bab9ac1\") "
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.645745 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs" (OuterVolumeSpecName: "logs") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.648940 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4" (OuterVolumeSpecName: "kube-api-access-w6qx4") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "kube-api-access-w6qx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.650581 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.650627 4814 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.650645 4814 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-logs\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.650662 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z2qg\" (UniqueName: \"kubernetes.io/projected/d6b028db-695e-4825-acd9-77ef7f1c40cc-kube-api-access-8z2qg\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.650673 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b028db-695e-4825-acd9-77ef7f1c40cc-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.673220 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data" (OuterVolumeSpecName: "config-data") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.677081 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.698799 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.702422 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "df76dd96-361c-4bd3-8bcb-02b27bab9ac1" (UID: "df76dd96-361c-4bd3-8bcb-02b27bab9ac1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.752799 4814 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.752832 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qx4\" (UniqueName: \"kubernetes.io/projected/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-kube-api-access-w6qx4\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.752872 4814 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.752883 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.752893 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df76dd96-361c-4bd3-8bcb-02b27bab9ac1-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 10:11:03 crc kubenswrapper[4814]: I0216 10:11:03.993289 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2"
Feb 16 10:11:03 crc kubenswrapper[4814]: E0216 10:11:03.993672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.039519 4814 generic.go:334] "Generic (PLEG): container finished" podID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerID="179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60" exitCode=0
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.039605 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerDied","Data":"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"}
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.039632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df76dd96-361c-4bd3-8bcb-02b27bab9ac1","Type":"ContainerDied","Data":"4f15ea9fa77b84f38a02b8e0ab2964552aba1321bcd7dbba0917f07d3c8ce7b0"}
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.039648 4814 scope.go:117] "RemoveContainer" containerID="179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.039772 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.050304 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6b028db-695e-4825-acd9-77ef7f1c40cc","Type":"ContainerDied","Data":"b1065176dad9c1605f310d5c532f905b304dd76f8e2397fb4aa0f8c9b6aadb38"}
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.050478 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.091076 4814 scope.go:117] "RemoveContainer" containerID="0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.132391 4814 scope.go:117] "RemoveContainer" containerID="179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.132399 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.132957 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60\": container with ID starting with 179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60 not found: ID does not exist" containerID="179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.132998 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60"} err="failed to get container status \"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60\": rpc error: code = NotFound desc = could not find container \"179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60\": container with ID starting with 179726553be08d317b422722dfa81f6d29d00959928bae57cdb9501aee78bb60 not found: ID does not exist"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.133337 4814 scope.go:117] "RemoveContainer" containerID="0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.163560 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.177236 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.189233 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842\": container with ID starting with 0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842 not found: ID does not exist" containerID="0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.189283 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842"} err="failed to get container status \"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842\": rpc error: code = NotFound desc = could not find container \"0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842\": container with ID starting with 0ac70ffb3f9d1bfe3b570999772923946603446f912b6c8dd92bc07c5e40d842 not found: ID does not exist"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.189310 4814 scope.go:117] "RemoveContainer" containerID="a756a519661f8b29f5e4ade3a3a2ef4679cf0afc59b8fa735038f13f3e37e05d"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.197398 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.223519 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.224024 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-api"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224039 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-api"
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.224057 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54488708-2f13-4ecc-a7a3-fb7372dc39ee" containerName="nova-manage"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224066 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="54488708-2f13-4ecc-a7a3-fb7372dc39ee" containerName="nova-manage"
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.224079 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-metadata"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224085 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-metadata"
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.224100 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224106 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: E0216 10:11:04.224121 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224127 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224305 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-api"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224319 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" containerName="nova-api-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224325 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-metadata"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224342 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" containerName="nova-metadata-log"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.224353 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="54488708-2f13-4ecc-a7a3-fb7372dc39ee" containerName="nova-manage"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.225441 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.228816 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.229166 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.230078 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.242593 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.244811 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.248294 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.248724 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.257132 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.273974 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.279761 4814 scope.go:117] "RemoveContainer" containerID="ae57e0aee747e053e38d12c28d053adae997439f7da6837dccddd8d9a9d67c1b"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.389945 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fb4bc5-2f98-4711-9149-e5da0a515242-logs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390014 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390095 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-config-data\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390123 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-config-data\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390155 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390177 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4274c26d-1a79-40ad-a0ef-9322dc9007c6-logs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390203 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390312 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv52\" (UniqueName: \"kubernetes.io/projected/4274c26d-1a79-40ad-a0ef-9322dc9007c6-kube-api-access-mrv52\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390384 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-public-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390698 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lhx\" (UniqueName: \"kubernetes.io/projected/49fb4bc5-2f98-4711-9149-e5da0a515242-kube-api-access-72lhx\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.390830 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.493937 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fb4bc5-2f98-4711-9149-e5da0a515242-logs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494016 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494063 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-config-data\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494086 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-config-data\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494115 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494134 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4274c26d-1a79-40ad-a0ef-9322dc9007c6-logs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494157 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494182 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrv52\" (UniqueName: \"kubernetes.io/projected/4274c26d-1a79-40ad-a0ef-9322dc9007c6-kube-api-access-mrv52\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494206 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-public-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494272 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lhx\" (UniqueName: \"kubernetes.io/projected/49fb4bc5-2f98-4711-9149-e5da0a515242-kube-api-access-72lhx\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494297 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.494998 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4274c26d-1a79-40ad-a0ef-9322dc9007c6-logs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.495379 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49fb4bc5-2f98-4711-9149-e5da0a515242-logs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.499470 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-internal-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.500445 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.500962 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.501181 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.501222 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-config-data\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.504072 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4274c26d-1a79-40ad-a0ef-9322dc9007c6-config-data\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.506438 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49fb4bc5-2f98-4711-9149-e5da0a515242-public-tls-certs\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.511827 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lhx\" (UniqueName: \"kubernetes.io/projected/49fb4bc5-2f98-4711-9149-e5da0a515242-kube-api-access-72lhx\") pod \"nova-api-0\" (UID: \"49fb4bc5-2f98-4711-9149-e5da0a515242\") " pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.511906 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrv52\" (UniqueName: \"kubernetes.io/projected/4274c26d-1a79-40ad-a0ef-9322dc9007c6-kube-api-access-mrv52\") pod \"nova-metadata-0\" (UID: \"4274c26d-1a79-40ad-a0ef-9322dc9007c6\") " pod="openstack/nova-metadata-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.584119 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 10:11:04 crc kubenswrapper[4814]: I0216 10:11:04.593055 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 10:11:05 crc kubenswrapper[4814]: I0216 10:11:05.011568 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b028db-695e-4825-acd9-77ef7f1c40cc" path="/var/lib/kubelet/pods/d6b028db-695e-4825-acd9-77ef7f1c40cc/volumes"
Feb 16 10:11:05 crc kubenswrapper[4814]: I0216 10:11:05.014648 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df76dd96-361c-4bd3-8bcb-02b27bab9ac1" path="/var/lib/kubelet/pods/df76dd96-361c-4bd3-8bcb-02b27bab9ac1/volumes"
Feb 16 10:11:05 crc kubenswrapper[4814]: W0216 10:11:05.098674 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4274c26d_1a79_40ad_a0ef_9322dc9007c6.slice/crio-1422ba918e09eb9c27bcee64c651158517f175ec34334d988c72de903fe46679 WatchSource:0}: Error finding container 1422ba918e09eb9c27bcee64c651158517f175ec34334d988c72de903fe46679: Status 404 returned error can't find the container with id 1422ba918e09eb9c27bcee64c651158517f175ec34334d988c72de903fe46679
Feb 16 10:11:05 crc kubenswrapper[4814]: I0216 10:11:05.101250 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 10:11:05 crc kubenswrapper[4814]: I0216 10:11:05.167867 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 10:11:05 crc kubenswrapper[4814]: W0216 10:11:05.171496 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49fb4bc5_2f98_4711_9149_e5da0a515242.slice/crio-1602eb004b056917c7683bd7a3e450e0f58fa556d4f91b555a5173e1386b163c WatchSource:0}: Error finding container 1602eb004b056917c7683bd7a3e450e0f58fa556d4f91b555a5173e1386b163c: Status 404 returned error can't find the container with id 1602eb004b056917c7683bd7a3e450e0f58fa556d4f91b555a5173e1386b163c
Feb 16 10:11:05 crc kubenswrapper[4814]: E0216 10:11:05.875472
4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 is running failed: container process not found" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:05 crc kubenswrapper[4814]: E0216 10:11:05.876212 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 is running failed: container process not found" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:05 crc kubenswrapper[4814]: E0216 10:11:05.876738 4814 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 is running failed: container process not found" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 10:11:05 crc kubenswrapper[4814]: E0216 10:11:05.876781 4814 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" Feb 16 10:11:05 crc kubenswrapper[4814]: I0216 10:11:05.960029 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.103755 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49fb4bc5-2f98-4711-9149-e5da0a515242","Type":"ContainerStarted","Data":"79c2d93762330d1f888ad37f81ac68c3930a128634dc21b7bf035f790921bdcf"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.104119 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49fb4bc5-2f98-4711-9149-e5da0a515242","Type":"ContainerStarted","Data":"0a011653a92139276cd6c2654c00971fdd2b4afa3ec013f6c8d087b0d9e69538"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.104137 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"49fb4bc5-2f98-4711-9149-e5da0a515242","Type":"ContainerStarted","Data":"1602eb004b056917c7683bd7a3e450e0f58fa556d4f91b555a5173e1386b163c"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.106990 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4274c26d-1a79-40ad-a0ef-9322dc9007c6","Type":"ContainerStarted","Data":"866b15dfab134b8c0ea6afb3e7406c3e414ce2476127ce12a3d546a70c7b06c2"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.107049 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4274c26d-1a79-40ad-a0ef-9322dc9007c6","Type":"ContainerStarted","Data":"0a4df6dbbdefcc64596cb0cc103ade6461ce2294b5b220d00cbccff63aafe7ed"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.107068 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4274c26d-1a79-40ad-a0ef-9322dc9007c6","Type":"ContainerStarted","Data":"1422ba918e09eb9c27bcee64c651158517f175ec34334d988c72de903fe46679"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.109172 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" exitCode=0 Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.109198 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.109215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed","Type":"ContainerDied","Data":"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.109238 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed","Type":"ContainerDied","Data":"1813c8f7a39e9972e9480fcf194bb3cbe55869af8ecae1dc0b4e2ffee8c6d5b1"} Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.109274 4814 scope.go:117] "RemoveContainer" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.135272 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn5fh\" (UniqueName: \"kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh\") pod \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.135340 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data\") pod \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.135673 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle\") pod \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\" (UID: \"323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed\") " Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.133636 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.13359072 podStartE2EDuration="2.13359072s" podCreationTimestamp="2026-02-16 10:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:11:06.127472834 +0000 UTC m=+1523.820629024" watchObservedRunningTime="2026-02-16 10:11:06.13359072 +0000 UTC m=+1523.826746900" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.145681 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh" (OuterVolumeSpecName: "kube-api-access-bn5fh") pod "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" (UID: "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed"). InnerVolumeSpecName "kube-api-access-bn5fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.168327 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.168301715 podStartE2EDuration="2.168301715s" podCreationTimestamp="2026-02-16 10:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:11:06.156155328 +0000 UTC m=+1523.849311538" watchObservedRunningTime="2026-02-16 10:11:06.168301715 +0000 UTC m=+1523.861457895" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.174201 4814 scope.go:117] "RemoveContainer" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" Feb 16 10:11:06 crc kubenswrapper[4814]: E0216 10:11:06.174555 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942\": container with ID starting with 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 not found: ID does not exist" containerID="3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.174587 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942"} err="failed to get container status \"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942\": rpc error: code = NotFound desc = could not find container \"3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942\": container with ID starting with 3469304191e824e580e2a3498e2d9385636c0f50630cba8c48d7f78531ad4942 not found: ID does not exist" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.186377 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data" (OuterVolumeSpecName: "config-data") pod "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" (UID: "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.186439 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" (UID: "323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.239035 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn5fh\" (UniqueName: \"kubernetes.io/projected/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-kube-api-access-bn5fh\") on node \"crc\" DevicePath \"\"" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.239079 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.239091 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.454847 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.471245 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.487333 4814 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 16 10:11:06 crc kubenswrapper[4814]: E0216 10:11:06.487957 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.487986 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.488216 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" containerName="nova-scheduler-scheduler" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.489028 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.495251 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.501078 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.653591 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-config-data\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.653847 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.653884 4814 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnw2\" (UniqueName: \"kubernetes.io/projected/9dd72e1b-1b70-4e89-84eb-751cca377954-kube-api-access-fvnw2\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.756274 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.756318 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnw2\" (UniqueName: \"kubernetes.io/projected/9dd72e1b-1b70-4e89-84eb-751cca377954-kube-api-access-fvnw2\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.756407 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-config-data\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.761844 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-config-data\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.768351 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dd72e1b-1b70-4e89-84eb-751cca377954-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.781606 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnw2\" (UniqueName: \"kubernetes.io/projected/9dd72e1b-1b70-4e89-84eb-751cca377954-kube-api-access-fvnw2\") pod \"nova-scheduler-0\" (UID: \"9dd72e1b-1b70-4e89-84eb-751cca377954\") " pod="openstack/nova-scheduler-0" Feb 16 10:11:06 crc kubenswrapper[4814]: I0216 10:11:06.830375 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 10:11:07 crc kubenswrapper[4814]: I0216 10:11:07.006831 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed" path="/var/lib/kubelet/pods/323f923f-76fa-45dd-8ab7-cfd5ebb3b4ed/volumes" Feb 16 10:11:07 crc kubenswrapper[4814]: I0216 10:11:07.281436 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 10:11:08 crc kubenswrapper[4814]: I0216 10:11:08.130186 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dd72e1b-1b70-4e89-84eb-751cca377954","Type":"ContainerStarted","Data":"18275139ca112823237fb61a4570c2056ed1c32f744a9fdc1498ffc39fd0898e"} Feb 16 10:11:08 crc kubenswrapper[4814]: I0216 10:11:08.130490 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9dd72e1b-1b70-4e89-84eb-751cca377954","Type":"ContainerStarted","Data":"76e58f12a11532bf038c4cecccb58882cb89ec6b3461f738f09351c8b9e05b06"} Feb 16 10:11:08 crc kubenswrapper[4814]: I0216 10:11:08.155310 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.15529074 podStartE2EDuration="2.15529074s" 
podCreationTimestamp="2026-02-16 10:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 10:11:08.149504714 +0000 UTC m=+1525.842660904" watchObservedRunningTime="2026-02-16 10:11:08.15529074 +0000 UTC m=+1525.848446910" Feb 16 10:11:09 crc kubenswrapper[4814]: I0216 10:11:09.593567 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:11:09 crc kubenswrapper[4814]: I0216 10:11:09.594012 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 10:11:11 crc kubenswrapper[4814]: I0216 10:11:11.830659 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 10:11:14 crc kubenswrapper[4814]: I0216 10:11:14.584779 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:11:14 crc kubenswrapper[4814]: I0216 10:11:14.586835 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 10:11:14 crc kubenswrapper[4814]: I0216 10:11:14.593142 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 10:11:14 crc kubenswrapper[4814]: I0216 10:11:14.593231 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 10:11:14 crc kubenswrapper[4814]: I0216 10:11:14.994116 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:11:14 crc kubenswrapper[4814]: E0216 10:11:14.994672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:11:15 crc kubenswrapper[4814]: I0216 10:11:15.596943 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49fb4bc5-2f98-4711-9149-e5da0a515242" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:11:15 crc kubenswrapper[4814]: I0216 10:11:15.597615 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="49fb4bc5-2f98-4711-9149-e5da0a515242" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:11:15 crc kubenswrapper[4814]: I0216 10:11:15.608825 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4274c26d-1a79-40ad-a0ef-9322dc9007c6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 10:11:15 crc kubenswrapper[4814]: I0216 10:11:15.608825 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4274c26d-1a79-40ad-a0ef-9322dc9007c6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 10:11:16 crc kubenswrapper[4814]: I0216 10:11:16.831675 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 10:11:16 crc kubenswrapper[4814]: I0216 10:11:16.867039 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 10:11:17 crc 
kubenswrapper[4814]: I0216 10:11:17.281179 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.595688 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.600854 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.606175 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.608385 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.610690 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.618122 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.618476 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 10:11:24 crc kubenswrapper[4814]: I0216 10:11:24.666697 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 10:11:25 crc kubenswrapper[4814]: I0216 10:11:25.321001 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 10:11:25 crc kubenswrapper[4814]: I0216 10:11:25.326376 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 10:11:25 crc kubenswrapper[4814]: I0216 10:11:25.331224 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 10:11:27 crc 
kubenswrapper[4814]: I0216 10:11:27.993911 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:11:27 crc kubenswrapper[4814]: E0216 10:11:27.994414 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:11:41 crc kubenswrapper[4814]: I0216 10:11:41.993613 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:11:41 crc kubenswrapper[4814]: E0216 10:11:41.994432 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:11:54 crc kubenswrapper[4814]: I0216 10:11:54.994338 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:11:55 crc kubenswrapper[4814]: I0216 10:11:55.696321 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b"} Feb 16 10:11:57 crc kubenswrapper[4814]: I0216 10:11:57.676827 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:11:58 crc kubenswrapper[4814]: I0216 10:11:58.732641 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" 
containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" exitCode=0 Feb 16 10:11:58 crc kubenswrapper[4814]: I0216 10:11:58.732725 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b"} Feb 16 10:11:58 crc kubenswrapper[4814]: I0216 10:11:58.732999 4814 scope.go:117] "RemoveContainer" containerID="f5777fd971d6426f7aac5b8a9afacd2ddfc2b408abf38fa95dd223afeb4fb6e2" Feb 16 10:11:58 crc kubenswrapper[4814]: I0216 10:11:58.734329 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:11:58 crc kubenswrapper[4814]: E0216 10:11:58.734789 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:00 crc kubenswrapper[4814]: I0216 10:12:00.676995 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:12:00 crc kubenswrapper[4814]: I0216 10:12:00.677939 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:00 crc kubenswrapper[4814]: E0216 10:12:00.678226 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:02 crc kubenswrapper[4814]: I0216 
10:12:02.677619 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:12:02 crc kubenswrapper[4814]: I0216 10:12:02.679140 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:02 crc kubenswrapper[4814]: E0216 10:12:02.679555 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:15 crc kubenswrapper[4814]: I0216 10:12:15.993493 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:15 crc kubenswrapper[4814]: E0216 10:12:15.994300 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:30 crc kubenswrapper[4814]: I0216 10:12:30.993601 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:30 crc kubenswrapper[4814]: E0216 10:12:30.994448 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:37 crc kubenswrapper[4814]: I0216 10:12:37.960886 4814 
patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:12:37 crc kubenswrapper[4814]: I0216 10:12:37.962116 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:12:43 crc kubenswrapper[4814]: I0216 10:12:43.011365 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:43 crc kubenswrapper[4814]: E0216 10:12:43.012900 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:12:48 crc kubenswrapper[4814]: I0216 10:12:48.478713 4814 scope.go:117] "RemoveContainer" containerID="879693e95db7e0dd088e0c9bcae4397573218786664ad510d2f6ed4f3ab28ec5" Feb 16 10:12:48 crc kubenswrapper[4814]: I0216 10:12:48.542703 4814 scope.go:117] "RemoveContainer" containerID="03dfa1ca386eaa421b810cf699533e26e54ab704dee95d4ca344aa6802a560fe" Feb 16 10:12:48 crc kubenswrapper[4814]: I0216 10:12:48.651952 4814 scope.go:117] "RemoveContainer" containerID="dc3b1cfcec1081750a0ebdb74921aa2359f9a8690ae3b5073f48e830622fd98d" Feb 16 10:12:48 crc kubenswrapper[4814]: I0216 10:12:48.719478 4814 scope.go:117] "RemoveContainer" containerID="fd838acb92ff96fcd44703bd387aa4dd5bf24f118cc44149dad8da43497034a3" Feb 16 
10:12:55 crc kubenswrapper[4814]: I0216 10:12:55.993747 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:12:55 crc kubenswrapper[4814]: E0216 10:12:55.995167 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 10:13:00.765126 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 10:13:00.768144 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 10:13:00.791124 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 10:13:00.923292 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 10:13:00.923404 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:00 crc kubenswrapper[4814]: I0216 
10:13:00.923434 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9ns\" (UniqueName: \"kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.025670 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.025752 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.025780 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9ns\" (UniqueName: \"kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.026432 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 
10:13:01.026448 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.044788 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9ns\" (UniqueName: \"kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns\") pod \"redhat-marketplace-vcl6w\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.094006 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:01 crc kubenswrapper[4814]: I0216 10:13:01.641823 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:02 crc kubenswrapper[4814]: I0216 10:13:02.460011 4814 generic.go:334] "Generic (PLEG): container finished" podID="ae3867df-1395-48d7-9511-86cdb1f38856" containerID="d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522" exitCode=0 Feb 16 10:13:02 crc kubenswrapper[4814]: I0216 10:13:02.460067 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerDied","Data":"d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522"} Feb 16 10:13:02 crc kubenswrapper[4814]: I0216 10:13:02.460097 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerStarted","Data":"53845d9170c22839c3294e6e67963004dfc219a635107da0b6ad61392e540a11"} Feb 16 
10:13:02 crc kubenswrapper[4814]: I0216 10:13:02.463394 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:13:03 crc kubenswrapper[4814]: I0216 10:13:03.472740 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerStarted","Data":"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d"} Feb 16 10:13:04 crc kubenswrapper[4814]: I0216 10:13:04.484269 4814 generic.go:334] "Generic (PLEG): container finished" podID="ae3867df-1395-48d7-9511-86cdb1f38856" containerID="a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d" exitCode=0 Feb 16 10:13:04 crc kubenswrapper[4814]: I0216 10:13:04.484318 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerDied","Data":"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d"} Feb 16 10:13:05 crc kubenswrapper[4814]: I0216 10:13:05.496064 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerStarted","Data":"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a"} Feb 16 10:13:05 crc kubenswrapper[4814]: I0216 10:13:05.519339 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vcl6w" podStartSLOduration=2.847740686 podStartE2EDuration="5.519322138s" podCreationTimestamp="2026-02-16 10:13:00 +0000 UTC" firstStartedPulling="2026-02-16 10:13:02.463146995 +0000 UTC m=+1640.156303185" lastFinishedPulling="2026-02-16 10:13:05.134728437 +0000 UTC m=+1642.827884637" observedRunningTime="2026-02-16 10:13:05.517959982 +0000 UTC m=+1643.211116162" watchObservedRunningTime="2026-02-16 10:13:05.519322138 +0000 UTC 
m=+1643.212478318" Feb 16 10:13:07 crc kubenswrapper[4814]: I0216 10:13:07.960091 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:13:07 crc kubenswrapper[4814]: I0216 10:13:07.961476 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:13:10 crc kubenswrapper[4814]: I0216 10:13:10.994756 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:13:10 crc kubenswrapper[4814]: E0216 10:13:10.995601 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:13:11 crc kubenswrapper[4814]: I0216 10:13:11.095101 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:11 crc kubenswrapper[4814]: I0216 10:13:11.095183 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:11 crc kubenswrapper[4814]: I0216 10:13:11.143657 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:11 crc kubenswrapper[4814]: I0216 10:13:11.612893 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:11 crc kubenswrapper[4814]: I0216 10:13:11.674250 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:13 crc kubenswrapper[4814]: I0216 10:13:13.579019 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vcl6w" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="registry-server" containerID="cri-o://c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a" gracePeriod=2 Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.161201 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.267141 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities\") pod \"ae3867df-1395-48d7-9511-86cdb1f38856\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.267841 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9ns\" (UniqueName: \"kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns\") pod \"ae3867df-1395-48d7-9511-86cdb1f38856\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.268021 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content\") pod \"ae3867df-1395-48d7-9511-86cdb1f38856\" (UID: \"ae3867df-1395-48d7-9511-86cdb1f38856\") " Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.268117 4814 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities" (OuterVolumeSpecName: "utilities") pod "ae3867df-1395-48d7-9511-86cdb1f38856" (UID: "ae3867df-1395-48d7-9511-86cdb1f38856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.268546 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.273974 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns" (OuterVolumeSpecName: "kube-api-access-5x9ns") pod "ae3867df-1395-48d7-9511-86cdb1f38856" (UID: "ae3867df-1395-48d7-9511-86cdb1f38856"). InnerVolumeSpecName "kube-api-access-5x9ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.286551 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae3867df-1395-48d7-9511-86cdb1f38856" (UID: "ae3867df-1395-48d7-9511-86cdb1f38856"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.370841 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae3867df-1395-48d7-9511-86cdb1f38856-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.370911 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9ns\" (UniqueName: \"kubernetes.io/projected/ae3867df-1395-48d7-9511-86cdb1f38856-kube-api-access-5x9ns\") on node \"crc\" DevicePath \"\"" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.603343 4814 generic.go:334] "Generic (PLEG): container finished" podID="ae3867df-1395-48d7-9511-86cdb1f38856" containerID="c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a" exitCode=0 Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.603488 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerDied","Data":"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a"} Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.603518 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vcl6w" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.603583 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vcl6w" event={"ID":"ae3867df-1395-48d7-9511-86cdb1f38856","Type":"ContainerDied","Data":"53845d9170c22839c3294e6e67963004dfc219a635107da0b6ad61392e540a11"} Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.603614 4814 scope.go:117] "RemoveContainer" containerID="c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.633960 4814 scope.go:117] "RemoveContainer" containerID="a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.670147 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.681165 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vcl6w"] Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.681480 4814 scope.go:117] "RemoveContainer" containerID="d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.746766 4814 scope.go:117] "RemoveContainer" containerID="c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a" Feb 16 10:13:14 crc kubenswrapper[4814]: E0216 10:13:14.747909 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a\": container with ID starting with c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a not found: ID does not exist" containerID="c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.748017 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a"} err="failed to get container status \"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a\": rpc error: code = NotFound desc = could not find container \"c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a\": container with ID starting with c1f8a1bb1eaa50ee1e12461ac04691e4a5ca2b183b456b263e47092ca2266b8a not found: ID does not exist" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.748056 4814 scope.go:117] "RemoveContainer" containerID="a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d" Feb 16 10:13:14 crc kubenswrapper[4814]: E0216 10:13:14.748554 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d\": container with ID starting with a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d not found: ID does not exist" containerID="a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.748585 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d"} err="failed to get container status \"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d\": rpc error: code = NotFound desc = could not find container \"a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d\": container with ID starting with a5b8e901a55bdb717bf13f19af4a843e651f2d1516f3baa3b5f8b5e1221a8d3d not found: ID does not exist" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.748604 4814 scope.go:117] "RemoveContainer" containerID="d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522" Feb 16 10:13:14 crc kubenswrapper[4814]: E0216 
10:13:14.749340 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522\": container with ID starting with d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522 not found: ID does not exist" containerID="d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522" Feb 16 10:13:14 crc kubenswrapper[4814]: I0216 10:13:14.749524 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522"} err="failed to get container status \"d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522\": rpc error: code = NotFound desc = could not find container \"d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522\": container with ID starting with d746603ea2df864e0725fa3247d206dcc347ec2d60e6b3aa65cfb020fc9a8522 not found: ID does not exist" Feb 16 10:13:15 crc kubenswrapper[4814]: I0216 10:13:15.208788 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" path="/var/lib/kubelet/pods/ae3867df-1395-48d7-9511-86cdb1f38856/volumes" Feb 16 10:13:23 crc kubenswrapper[4814]: I0216 10:13:23.994382 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:13:23 crc kubenswrapper[4814]: E0216 10:13:23.995578 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.960598 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.961284 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.961363 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.962934 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.963101 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" gracePeriod=600 Feb 16 10:13:37 crc kubenswrapper[4814]: I0216 10:13:37.993745 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:13:37 crc kubenswrapper[4814]: E0216 10:13:37.994128 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:13:38 crc kubenswrapper[4814]: E0216 10:13:38.096302 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:13:38 crc kubenswrapper[4814]: I0216 10:13:38.929706 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" exitCode=0 Feb 16 10:13:38 crc kubenswrapper[4814]: I0216 10:13:38.930003 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40"} Feb 16 10:13:38 crc kubenswrapper[4814]: I0216 10:13:38.930270 4814 scope.go:117] "RemoveContainer" containerID="d69efd8fe9b99e84b5f788c4ef81733d235dcbd9751322ed8d1ae82ada37f8b1" Feb 16 10:13:38 crc kubenswrapper[4814]: I0216 10:13:38.932435 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:13:38 crc kubenswrapper[4814]: E0216 10:13:38.933006 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:13:48 crc kubenswrapper[4814]: I0216 10:13:48.839590 4814 scope.go:117] "RemoveContainer" containerID="c931e684eccaf441bbf7e8ff7254141e673975409d35a5c1bd1ac8b68187b239" Feb 16 10:13:48 crc kubenswrapper[4814]: I0216 10:13:48.889216 4814 scope.go:117] "RemoveContainer" containerID="1ca0e15a8c6335eba0f51179a0ef84993248736ec2aadcd570683c7ec71c8636" Feb 16 10:13:48 crc kubenswrapper[4814]: I0216 10:13:48.926734 4814 scope.go:117] "RemoveContainer" containerID="70181e14dfe49aa520a8a0cc43a4c6f8fedb72a86af76911b5adefdf67f203c3" Feb 16 10:13:53 crc kubenswrapper[4814]: I0216 10:13:53.002284 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:13:53 crc kubenswrapper[4814]: E0216 10:13:53.003346 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:13:53 crc kubenswrapper[4814]: I0216 10:13:53.003650 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:13:53 crc kubenswrapper[4814]: E0216 10:13:53.004021 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" 
podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:04 crc kubenswrapper[4814]: I0216 10:14:04.031555 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:14:04 crc kubenswrapper[4814]: E0216 10:14:04.033271 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:05 crc kubenswrapper[4814]: I0216 10:14:05.994092 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:14:05 crc kubenswrapper[4814]: E0216 10:14:05.995064 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:14:16 crc kubenswrapper[4814]: I0216 10:14:16.993645 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:14:16 crc kubenswrapper[4814]: E0216 10:14:16.994702 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:18 crc kubenswrapper[4814]: I0216 10:14:18.995730 4814 scope.go:117] 
"RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:14:18 crc kubenswrapper[4814]: E0216 10:14:18.996512 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.596276 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:26 crc kubenswrapper[4814]: E0216 10:14:26.598008 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="extract-content" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.598032 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="extract-content" Feb 16 10:14:26 crc kubenswrapper[4814]: E0216 10:14:26.598058 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="registry-server" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.598067 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="registry-server" Feb 16 10:14:26 crc kubenswrapper[4814]: E0216 10:14:26.598105 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="extract-utilities" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.598116 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="extract-utilities" Feb 16 10:14:26 crc kubenswrapper[4814]: 
I0216 10:14:26.598391 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae3867df-1395-48d7-9511-86cdb1f38856" containerName="registry-server" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.600626 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.610019 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.726648 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k8xx\" (UniqueName: \"kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.727338 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.727676 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.830445 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.830623 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k8xx\" (UniqueName: \"kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.830752 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.831119 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.831198 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.905748 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k8xx\" (UniqueName: 
\"kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx\") pod \"community-operators-stcq6\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:26 crc kubenswrapper[4814]: I0216 10:14:26.926246 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:27 crc kubenswrapper[4814]: I0216 10:14:27.365154 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:27 crc kubenswrapper[4814]: I0216 10:14:27.493284 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerStarted","Data":"14940cc67cd1175d984720ae47c15b6a942814476024898e6629cf43cf582085"} Feb 16 10:14:28 crc kubenswrapper[4814]: I0216 10:14:28.507627 4814 generic.go:334] "Generic (PLEG): container finished" podID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerID="5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2" exitCode=0 Feb 16 10:14:28 crc kubenswrapper[4814]: I0216 10:14:28.507742 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerDied","Data":"5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2"} Feb 16 10:14:29 crc kubenswrapper[4814]: I0216 10:14:29.527117 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerStarted","Data":"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9"} Feb 16 10:14:30 crc kubenswrapper[4814]: I0216 10:14:30.542163 4814 generic.go:334] "Generic (PLEG): container finished" podID="38053494-2c0b-4472-a88c-9e65b48d4b04" 
containerID="9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9" exitCode=0 Feb 16 10:14:30 crc kubenswrapper[4814]: I0216 10:14:30.542242 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerDied","Data":"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9"} Feb 16 10:14:31 crc kubenswrapper[4814]: I0216 10:14:31.557441 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerStarted","Data":"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2"} Feb 16 10:14:31 crc kubenswrapper[4814]: I0216 10:14:31.593992 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-stcq6" podStartSLOduration=2.969960077 podStartE2EDuration="5.593948583s" podCreationTimestamp="2026-02-16 10:14:26 +0000 UTC" firstStartedPulling="2026-02-16 10:14:28.510830986 +0000 UTC m=+1726.203987196" lastFinishedPulling="2026-02-16 10:14:31.134819522 +0000 UTC m=+1728.827975702" observedRunningTime="2026-02-16 10:14:31.586036208 +0000 UTC m=+1729.279192408" watchObservedRunningTime="2026-02-16 10:14:31.593948583 +0000 UTC m=+1729.287104763" Feb 16 10:14:31 crc kubenswrapper[4814]: I0216 10:14:31.993648 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:14:31 crc kubenswrapper[4814]: E0216 10:14:31.994335 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:33 crc 
kubenswrapper[4814]: I0216 10:14:33.005083 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:14:33 crc kubenswrapper[4814]: E0216 10:14:33.005874 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:14:36 crc kubenswrapper[4814]: I0216 10:14:36.926972 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:36 crc kubenswrapper[4814]: I0216 10:14:36.927530 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:36 crc kubenswrapper[4814]: I0216 10:14:36.992448 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:37 crc kubenswrapper[4814]: I0216 10:14:37.691740 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:37 crc kubenswrapper[4814]: I0216 10:14:37.758353 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:39 crc kubenswrapper[4814]: I0216 10:14:39.645182 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-stcq6" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="registry-server" containerID="cri-o://4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2" gracePeriod=2 Feb 16 10:14:40 crc kubenswrapper[4814]: 
I0216 10:14:40.196471 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.302271 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities\") pod \"38053494-2c0b-4472-a88c-9e65b48d4b04\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.302688 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k8xx\" (UniqueName: \"kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx\") pod \"38053494-2c0b-4472-a88c-9e65b48d4b04\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.302890 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content\") pod \"38053494-2c0b-4472-a88c-9e65b48d4b04\" (UID: \"38053494-2c0b-4472-a88c-9e65b48d4b04\") " Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.308596 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities" (OuterVolumeSpecName: "utilities") pod "38053494-2c0b-4472-a88c-9e65b48d4b04" (UID: "38053494-2c0b-4472-a88c-9e65b48d4b04"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.311939 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx" (OuterVolumeSpecName: "kube-api-access-4k8xx") pod "38053494-2c0b-4472-a88c-9e65b48d4b04" (UID: "38053494-2c0b-4472-a88c-9e65b48d4b04"). InnerVolumeSpecName "kube-api-access-4k8xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.355352 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38053494-2c0b-4472-a88c-9e65b48d4b04" (UID: "38053494-2c0b-4472-a88c-9e65b48d4b04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.406905 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.406944 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38053494-2c0b-4472-a88c-9e65b48d4b04-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.406958 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k8xx\" (UniqueName: \"kubernetes.io/projected/38053494-2c0b-4472-a88c-9e65b48d4b04-kube-api-access-4k8xx\") on node \"crc\" DevicePath \"\"" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.664220 4814 generic.go:334] "Generic (PLEG): container finished" podID="38053494-2c0b-4472-a88c-9e65b48d4b04" 
containerID="4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2" exitCode=0 Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.664275 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-stcq6" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.664335 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerDied","Data":"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2"} Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.664421 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-stcq6" event={"ID":"38053494-2c0b-4472-a88c-9e65b48d4b04","Type":"ContainerDied","Data":"14940cc67cd1175d984720ae47c15b6a942814476024898e6629cf43cf582085"} Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.664485 4814 scope.go:117] "RemoveContainer" containerID="4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.729257 4814 scope.go:117] "RemoveContainer" containerID="9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.741949 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.757612 4814 scope.go:117] "RemoveContainer" containerID="5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.763577 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-stcq6"] Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.855819 4814 scope.go:117] "RemoveContainer" containerID="4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2" Feb 16 
10:14:40 crc kubenswrapper[4814]: E0216 10:14:40.859681 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2\": container with ID starting with 4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2 not found: ID does not exist" containerID="4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.859734 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2"} err="failed to get container status \"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2\": rpc error: code = NotFound desc = could not find container \"4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2\": container with ID starting with 4008af3f08f87de99287c8201337e3be2bf2e0308cdc216752187f80d6ef59c2 not found: ID does not exist" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.859766 4814 scope.go:117] "RemoveContainer" containerID="9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9" Feb 16 10:14:40 crc kubenswrapper[4814]: E0216 10:14:40.860153 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9\": container with ID starting with 9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9 not found: ID does not exist" containerID="9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.860183 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9"} err="failed to get container status 
\"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9\": rpc error: code = NotFound desc = could not find container \"9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9\": container with ID starting with 9f952ef567e28d797281364c43536df76110ee27d0e92fa1c6ac4f2d37e598f9 not found: ID does not exist" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.860200 4814 scope.go:117] "RemoveContainer" containerID="5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2" Feb 16 10:14:40 crc kubenswrapper[4814]: E0216 10:14:40.860821 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2\": container with ID starting with 5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2 not found: ID does not exist" containerID="5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2" Feb 16 10:14:40 crc kubenswrapper[4814]: I0216 10:14:40.860848 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2"} err="failed to get container status \"5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2\": rpc error: code = NotFound desc = could not find container \"5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2\": container with ID starting with 5604fb9a0a80de822077fe4b9874c9ce7d195e1ceb179aad666fbececfa2e3b2 not found: ID does not exist" Feb 16 10:14:41 crc kubenswrapper[4814]: I0216 10:14:41.007296 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" path="/var/lib/kubelet/pods/38053494-2c0b-4472-a88c-9e65b48d4b04/volumes" Feb 16 10:14:43 crc kubenswrapper[4814]: I0216 10:14:43.995098 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 
10:14:44 crc kubenswrapper[4814]: I0216 10:14:44.711821 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9"} Feb 16 10:14:47 crc kubenswrapper[4814]: I0216 10:14:47.677511 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:14:47 crc kubenswrapper[4814]: I0216 10:14:47.993307 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:14:47 crc kubenswrapper[4814]: E0216 10:14:47.993675 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:14:48 crc kubenswrapper[4814]: I0216 10:14:48.798338 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" exitCode=0 Feb 16 10:14:48 crc kubenswrapper[4814]: I0216 10:14:48.799854 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9"} Feb 16 10:14:48 crc kubenswrapper[4814]: I0216 10:14:48.799984 4814 scope.go:117] "RemoveContainer" containerID="13ab6598c51952d2ee4108b2fce79559f2bd47c4d211ed1b94be5ce7908a396b" Feb 16 10:14:48 crc kubenswrapper[4814]: I0216 10:14:48.801541 4814 scope.go:117] "RemoveContainer" 
containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:14:48 crc kubenswrapper[4814]: E0216 10:14:48.801876 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:51 crc kubenswrapper[4814]: I0216 10:14:51.676905 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:14:51 crc kubenswrapper[4814]: I0216 10:14:51.678483 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:14:51 crc kubenswrapper[4814]: E0216 10:14:51.678917 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:52 crc kubenswrapper[4814]: I0216 10:14:52.676666 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:14:52 crc kubenswrapper[4814]: I0216 10:14:52.677827 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:14:52 crc kubenswrapper[4814]: E0216 10:14:52.678135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" 
podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:14:59 crc kubenswrapper[4814]: I0216 10:14:59.994538 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:14:59 crc kubenswrapper[4814]: E0216 10:14:59.995475 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.155921 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm"] Feb 16 10:15:00 crc kubenswrapper[4814]: E0216 10:15:00.156410 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="extract-content" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.156427 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="extract-content" Feb 16 10:15:00 crc kubenswrapper[4814]: E0216 10:15:00.156459 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="registry-server" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.156468 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="registry-server" Feb 16 10:15:00 crc kubenswrapper[4814]: E0216 10:15:00.156484 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="extract-utilities" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.156492 4814 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="extract-utilities" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.156756 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="38053494-2c0b-4472-a88c-9e65b48d4b04" containerName="registry-server" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.157692 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.160515 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.160825 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.182324 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm"] Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.310840 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp7mg\" (UniqueName: \"kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.311490 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.311701 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.414682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp7mg\" (UniqueName: \"kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.415198 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.415249 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.416310 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.424705 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.438278 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp7mg\" (UniqueName: \"kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg\") pod \"collect-profiles-29520615-8rjlm\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:00 crc kubenswrapper[4814]: I0216 10:15:00.493027 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:01 crc kubenswrapper[4814]: I0216 10:15:01.014981 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm"] Feb 16 10:15:01 crc kubenswrapper[4814]: I0216 10:15:01.970756 4814 generic.go:334] "Generic (PLEG): container finished" podID="93acc19d-fd99-485c-98ca-21f065258a67" containerID="44f603042edf0ddd7fea68a572297b2beaa64dec53d468a20f1a4c861aadcf32" exitCode=0 Feb 16 10:15:01 crc kubenswrapper[4814]: I0216 10:15:01.971367 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" event={"ID":"93acc19d-fd99-485c-98ca-21f065258a67","Type":"ContainerDied","Data":"44f603042edf0ddd7fea68a572297b2beaa64dec53d468a20f1a4c861aadcf32"} Feb 16 10:15:01 crc kubenswrapper[4814]: I0216 10:15:01.971402 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" event={"ID":"93acc19d-fd99-485c-98ca-21f065258a67","Type":"ContainerStarted","Data":"92857732132a579f013e875462d81b2be1fbf41593c325b79b16d02313baba4a"} Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.410600 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.602822 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume\") pod \"93acc19d-fd99-485c-98ca-21f065258a67\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.603114 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume\") pod \"93acc19d-fd99-485c-98ca-21f065258a67\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.603211 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp7mg\" (UniqueName: \"kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg\") pod \"93acc19d-fd99-485c-98ca-21f065258a67\" (UID: \"93acc19d-fd99-485c-98ca-21f065258a67\") " Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.604132 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume" (OuterVolumeSpecName: "config-volume") pod "93acc19d-fd99-485c-98ca-21f065258a67" (UID: "93acc19d-fd99-485c-98ca-21f065258a67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.609759 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg" (OuterVolumeSpecName: "kube-api-access-wp7mg") pod "93acc19d-fd99-485c-98ca-21f065258a67" (UID: "93acc19d-fd99-485c-98ca-21f065258a67"). 
InnerVolumeSpecName "kube-api-access-wp7mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.615671 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "93acc19d-fd99-485c-98ca-21f065258a67" (UID: "93acc19d-fd99-485c-98ca-21f065258a67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.705900 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93acc19d-fd99-485c-98ca-21f065258a67-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.705940 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp7mg\" (UniqueName: \"kubernetes.io/projected/93acc19d-fd99-485c-98ca-21f065258a67-kube-api-access-wp7mg\") on node \"crc\" DevicePath \"\"" Feb 16 10:15:03 crc kubenswrapper[4814]: I0216 10:15:03.705952 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/93acc19d-fd99-485c-98ca-21f065258a67-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:15:04 crc kubenswrapper[4814]: I0216 10:15:04.005447 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" event={"ID":"93acc19d-fd99-485c-98ca-21f065258a67","Type":"ContainerDied","Data":"92857732132a579f013e875462d81b2be1fbf41593c325b79b16d02313baba4a"} Feb 16 10:15:04 crc kubenswrapper[4814]: I0216 10:15:04.005804 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92857732132a579f013e875462d81b2be1fbf41593c325b79b16d02313baba4a" Feb 16 10:15:04 crc kubenswrapper[4814]: I0216 10:15:04.005896 4814 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm" Feb 16 10:15:07 crc kubenswrapper[4814]: I0216 10:15:07.993690 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:15:07 crc kubenswrapper[4814]: E0216 10:15:07.994309 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:15:13 crc kubenswrapper[4814]: I0216 10:15:13.994856 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:15:13 crc kubenswrapper[4814]: E0216 10:15:13.996196 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:15:19 crc kubenswrapper[4814]: I0216 10:15:19.993994 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:15:19 crc kubenswrapper[4814]: E0216 10:15:19.995213 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 
16 10:15:25 crc kubenswrapper[4814]: I0216 10:15:25.994245 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:15:25 crc kubenswrapper[4814]: E0216 10:15:25.995030 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:15:33 crc kubenswrapper[4814]: I0216 10:15:33.002686 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:15:33 crc kubenswrapper[4814]: E0216 10:15:33.003477 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:15:39 crc kubenswrapper[4814]: I0216 10:15:39.994018 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:15:39 crc kubenswrapper[4814]: E0216 10:15:39.995391 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:15:43 crc kubenswrapper[4814]: I0216 10:15:43.994842 
4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:15:43 crc kubenswrapper[4814]: E0216 10:15:43.995897 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:15:49 crc kubenswrapper[4814]: I0216 10:15:49.062433 4814 scope.go:117] "RemoveContainer" containerID="92ce7f28cc0f33a92386134ff28f2e3869e5a8e4f45a588e7c3ae90879db6ec2" Feb 16 10:15:49 crc kubenswrapper[4814]: I0216 10:15:49.111146 4814 scope.go:117] "RemoveContainer" containerID="b7289e8bf37a4138772d9d1aa091380882eba7033e96d202cabf0dd9b26a2fb0" Feb 16 10:15:49 crc kubenswrapper[4814]: I0216 10:15:49.139834 4814 scope.go:117] "RemoveContainer" containerID="a0bb101cb11e26c910eba1b56c7b21d9b621f6b2f636dfda6c5fd8385ad6c199" Feb 16 10:15:49 crc kubenswrapper[4814]: I0216 10:15:49.166178 4814 scope.go:117] "RemoveContainer" containerID="3b141facc58537ec075c0a74fe1d108c1041a874ce4cfaa8c64524f3dd395631" Feb 16 10:15:50 crc kubenswrapper[4814]: I0216 10:15:50.065994 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2vmt7"] Feb 16 10:15:50 crc kubenswrapper[4814]: I0216 10:15:50.092434 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c82c-account-create-update-56mbg"] Feb 16 10:15:50 crc kubenswrapper[4814]: I0216 10:15:50.107455 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c82c-account-create-update-56mbg"] Feb 16 10:15:50 crc kubenswrapper[4814]: I0216 10:15:50.126652 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2vmt7"] Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.009769 4814 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57490ed3-3fac-4ecb-84b5-1017a06e0ca9" path="/var/lib/kubelet/pods/57490ed3-3fac-4ecb-84b5-1017a06e0ca9/volumes" Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.012141 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d69bc477-7bb4-4eb7-9598-119036f38586" path="/var/lib/kubelet/pods/d69bc477-7bb4-4eb7-9598-119036f38586/volumes" Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.059228 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-4d74-account-create-update-mq22d"] Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.073658 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-4d74-account-create-update-mq22d"] Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.090646 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-l5r5d"] Feb 16 10:15:51 crc kubenswrapper[4814]: I0216 10:15:51.102775 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-l5r5d"] Feb 16 10:15:53 crc kubenswrapper[4814]: I0216 10:15:53.008031 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:15:53 crc kubenswrapper[4814]: E0216 10:15:53.008707 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:15:53 crc kubenswrapper[4814]: I0216 10:15:53.013628 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066d5f2f-7797-41bc-850f-c4639db01b54" 
path="/var/lib/kubelet/pods/066d5f2f-7797-41bc-850f-c4639db01b54/volumes" Feb 16 10:15:53 crc kubenswrapper[4814]: I0216 10:15:53.014369 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2c4883-477f-42e0-923c-48053735598f" path="/var/lib/kubelet/pods/4d2c4883-477f-42e0-923c-48053735598f/volumes" Feb 16 10:15:56 crc kubenswrapper[4814]: I0216 10:15:56.993949 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:15:56 crc kubenswrapper[4814]: E0216 10:15:56.995044 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:15:58 crc kubenswrapper[4814]: I0216 10:15:58.065909 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rqskr"] Feb 16 10:15:58 crc kubenswrapper[4814]: I0216 10:15:58.080464 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e11f-account-create-update-pl8x6"] Feb 16 10:15:58 crc kubenswrapper[4814]: I0216 10:15:58.093409 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rqskr"] Feb 16 10:15:58 crc kubenswrapper[4814]: I0216 10:15:58.102846 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e11f-account-create-update-pl8x6"] Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.008381 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d36422e8-334d-414d-8d3f-b5a66ce72da2" path="/var/lib/kubelet/pods/d36422e8-334d-414d-8d3f-b5a66ce72da2/volumes" Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.009771 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4cf8b58-cd5c-46a9-9513-89178d899f14" path="/var/lib/kubelet/pods/f4cf8b58-cd5c-46a9-9513-89178d899f14/volumes" Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.049964 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6dxk8"] Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.065005 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d31f-account-create-update-jgnvs"] Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.078820 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6dxk8"] Feb 16 10:15:59 crc kubenswrapper[4814]: I0216 10:15:59.089624 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d31f-account-create-update-jgnvs"] Feb 16 10:16:01 crc kubenswrapper[4814]: I0216 10:16:01.013423 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08262c5b-0d62-4a80-9b03-76fc4d2297f3" path="/var/lib/kubelet/pods/08262c5b-0d62-4a80-9b03-76fc4d2297f3/volumes" Feb 16 10:16:01 crc kubenswrapper[4814]: I0216 10:16:01.015474 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e41c90-a220-49bf-ac62-9653ee282da0" path="/var/lib/kubelet/pods/15e41c90-a220-49bf-ac62-9653ee282da0/volumes" Feb 16 10:16:06 crc kubenswrapper[4814]: I0216 10:16:06.995776 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:16:06 crc kubenswrapper[4814]: E0216 10:16:06.997564 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:16:09 crc 
kubenswrapper[4814]: I0216 10:16:09.993863 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:16:09 crc kubenswrapper[4814]: E0216 10:16:09.995025 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:16:21 crc kubenswrapper[4814]: I0216 10:16:21.994277 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:16:21 crc kubenswrapper[4814]: E0216 10:16:21.995494 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:16:23 crc kubenswrapper[4814]: I0216 10:16:23.994378 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:16:23 crc kubenswrapper[4814]: E0216 10:16:23.995140 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:16:25 crc kubenswrapper[4814]: I0216 10:16:25.043146 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-88dqn"] Feb 16 10:16:25 crc kubenswrapper[4814]: I0216 10:16:25.052923 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-88dqn"] Feb 16 10:16:27 crc kubenswrapper[4814]: I0216 10:16:27.012123 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac41f60a-214e-4093-ae06-4491ce820f53" path="/var/lib/kubelet/pods/ac41f60a-214e-4093-ae06-4491ce820f53/volumes" Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.040019 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a97d-account-create-update-9gppj"] Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.053672 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-czp2g"] Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.063968 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7wb47"] Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.073775 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a97d-account-create-update-9gppj"] Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.082745 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-czp2g"] Feb 16 10:16:28 crc kubenswrapper[4814]: I0216 10:16:28.091990 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7wb47"] Feb 16 10:16:29 crc kubenswrapper[4814]: I0216 10:16:29.013084 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2e189e-8b3c-47a6-840f-9bca1dc9a429" path="/var/lib/kubelet/pods/0b2e189e-8b3c-47a6-840f-9bca1dc9a429/volumes" Feb 16 10:16:29 crc kubenswrapper[4814]: I0216 10:16:29.014412 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="510cfe06-8c29-40c1-abb9-0290e4d93541" path="/var/lib/kubelet/pods/510cfe06-8c29-40c1-abb9-0290e4d93541/volumes" Feb 16 10:16:29 crc kubenswrapper[4814]: 
I0216 10:16:29.015240 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9498592a-bccd-4780-bee1-7bcf7ab10ad2" path="/var/lib/kubelet/pods/9498592a-bccd-4780-bee1-7bcf7ab10ad2/volumes" Feb 16 10:16:34 crc kubenswrapper[4814]: I0216 10:16:34.994255 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:16:34 crc kubenswrapper[4814]: E0216 10:16:34.995199 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:16:36 crc kubenswrapper[4814]: I0216 10:16:36.995425 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:16:36 crc kubenswrapper[4814]: E0216 10:16:36.996094 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.061825 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e256-account-create-update-clbl4"] Feb 16 10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.072967 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fn7m9"] Feb 16 10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.081514 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-759b-account-create-update-xd2c6"] Feb 16 
10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.090904 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e256-account-create-update-clbl4"] Feb 16 10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.099403 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fn7m9"] Feb 16 10:16:37 crc kubenswrapper[4814]: I0216 10:16:37.107500 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-759b-account-create-update-xd2c6"] Feb 16 10:16:39 crc kubenswrapper[4814]: I0216 10:16:39.030444 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c27257c-47e3-46d2-9324-70c85dd9e6ed" path="/var/lib/kubelet/pods/2c27257c-47e3-46d2-9324-70c85dd9e6ed/volumes" Feb 16 10:16:39 crc kubenswrapper[4814]: I0216 10:16:39.032135 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67344a8d-c26c-483f-b974-da997583505e" path="/var/lib/kubelet/pods/67344a8d-c26c-483f-b974-da997583505e/volumes" Feb 16 10:16:39 crc kubenswrapper[4814]: I0216 10:16:39.032974 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc042a5c-d892-4056-ba5f-28fbdeac4a5e" path="/var/lib/kubelet/pods/dc042a5c-d892-4056-ba5f-28fbdeac4a5e/volumes" Feb 16 10:16:45 crc kubenswrapper[4814]: I0216 10:16:45.994336 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:16:45 crc kubenswrapper[4814]: E0216 10:16:45.995193 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:16:46 crc kubenswrapper[4814]: I0216 10:16:46.058431 4814 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-t9fz6"] Feb 16 10:16:46 crc kubenswrapper[4814]: I0216 10:16:46.073369 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-t9fz6"] Feb 16 10:16:47 crc kubenswrapper[4814]: I0216 10:16:47.020639 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926559f6-8c52-4fdf-913e-2f2e43c4e409" path="/var/lib/kubelet/pods/926559f6-8c52-4fdf-913e-2f2e43c4e409/volumes" Feb 16 10:16:47 crc kubenswrapper[4814]: I0216 10:16:47.038726 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-b8klg"] Feb 16 10:16:47 crc kubenswrapper[4814]: I0216 10:16:47.049820 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-b8klg"] Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.015099 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0842a785-6944-4bb8-8c72-65aa4b098128" path="/var/lib/kubelet/pods/0842a785-6944-4bb8-8c72-65aa4b098128/volumes" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.236948 4814 scope.go:117] "RemoveContainer" containerID="79550fdadfd7074925ecf632f7593d1c9b4f3229bccf55514e961b5162704f80" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.295314 4814 scope.go:117] "RemoveContainer" containerID="828e08bacf56261142c80fd0af11ca4b7d35bf37dd30201086fde931b8b62b80" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.342966 4814 scope.go:117] "RemoveContainer" containerID="00a3ca713b49e466cf756734fabf15ceed721f487084f4c90ff73e6f09882873" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.431121 4814 scope.go:117] "RemoveContainer" containerID="7180e58b05cce41ac45579d89f3adee4f78c3574c740a4e0e55aaa57a7f36d3c" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.472310 4814 scope.go:117] "RemoveContainer" containerID="0e536721c4beed0a71f0aeac25cdd776d60da37507472120bc862bae521e5507" Feb 16 10:16:49 crc kubenswrapper[4814]: 
I0216 10:16:49.541351 4814 scope.go:117] "RemoveContainer" containerID="9f00a193ae84bd53dd92b270216bb64d06b8ed3d41272c938a8315b63ae9273e" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.577020 4814 scope.go:117] "RemoveContainer" containerID="09fbf61ed9652c4a68026d9446999e4cc6ccd2c939a823c03b735fd6e8111c5c" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.616771 4814 scope.go:117] "RemoveContainer" containerID="bdac0bf1c4f3f96a8e58083e6865222f820fb309d4fa7b459bb998ce1b75da70" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.653278 4814 scope.go:117] "RemoveContainer" containerID="f2826ac76e48f667b92e969f95a7eea75a364665640ebc68b931914daa23173f" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.681144 4814 scope.go:117] "RemoveContainer" containerID="706d2cdbdf57a6736c3a7e8da5f686610a1b33c4478b6341533b0fc98c5d1184" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.705780 4814 scope.go:117] "RemoveContainer" containerID="a5805c30106ebe0ea388418561dea2fd3732034017adbc838825c1a0c52863e1" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.728239 4814 scope.go:117] "RemoveContainer" containerID="11f71749571f2988441878d39fe87babb7981192213483e497c2e55d796959e8" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.754659 4814 scope.go:117] "RemoveContainer" containerID="09e7bff9ed19c6120a19fe7f800e884cb583cded12428427bef73bfe718eea04" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.783075 4814 scope.go:117] "RemoveContainer" containerID="ba7e16c7dd5560fea3213bd4c30db64895628c58cdcc55a9ef477b79c62dd555" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.816398 4814 scope.go:117] "RemoveContainer" containerID="1b6509ca2734d4f9021cc85992c419fc68e301acbdb3faf48b528c4e8e2f5950" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.861867 4814 scope.go:117] "RemoveContainer" containerID="73e75bbae7f45019aed7ef9b3c95224eadcbced429b89546248cbd68f05d9c9f" Feb 16 10:16:49 crc kubenswrapper[4814]: I0216 10:16:49.915495 4814 
scope.go:117] "RemoveContainer" containerID="1ff50a2aa5b814bf432baf20cc7d53aad76ed22c0e4fc31d3537ab7640253902" Feb 16 10:16:51 crc kubenswrapper[4814]: I0216 10:16:51.993766 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:16:51 crc kubenswrapper[4814]: E0216 10:16:51.994768 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:16:55 crc kubenswrapper[4814]: I0216 10:16:55.071352 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9znsh"] Feb 16 10:16:55 crc kubenswrapper[4814]: I0216 10:16:55.089868 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9znsh"] Feb 16 10:16:56 crc kubenswrapper[4814]: I0216 10:16:56.995465 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:16:56 crc kubenswrapper[4814]: E0216 10:16:56.996114 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:16:57 crc kubenswrapper[4814]: I0216 10:16:57.015955 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332682c6-8779-42d6-8445-1be863b81659" path="/var/lib/kubelet/pods/332682c6-8779-42d6-8445-1be863b81659/volumes" Feb 16 10:17:04 crc kubenswrapper[4814]: I0216 
10:17:04.994625 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:17:04 crc kubenswrapper[4814]: E0216 10:17:04.997652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:17:08 crc kubenswrapper[4814]: I0216 10:17:08.993515 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:17:08 crc kubenswrapper[4814]: E0216 10:17:08.994427 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:17:16 crc kubenswrapper[4814]: I0216 10:17:16.994844 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:17:16 crc kubenswrapper[4814]: E0216 10:17:16.997834 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.735757 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 
10:17:18 crc kubenswrapper[4814]: E0216 10:17:18.736440 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93acc19d-fd99-485c-98ca-21f065258a67" containerName="collect-profiles" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.736453 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="93acc19d-fd99-485c-98ca-21f065258a67" containerName="collect-profiles" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.737623 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="93acc19d-fd99-485c-98ca-21f065258a67" containerName="collect-profiles" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.739122 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.755926 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.814575 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qlf\" (UniqueName: \"kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.814900 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.815228 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.916787 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.917092 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.917145 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7qlf\" (UniqueName: \"kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.917870 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.918117 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:18 crc kubenswrapper[4814]: I0216 10:17:18.936808 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7qlf\" (UniqueName: \"kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf\") pod \"certified-operators-5fx5z\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:19 crc kubenswrapper[4814]: I0216 10:17:19.077115 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:19 crc kubenswrapper[4814]: I0216 10:17:19.619049 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 10:17:19 crc kubenswrapper[4814]: I0216 10:17:19.711885 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerStarted","Data":"b3deaf2e8c00fc461f837f37c017946625a9497829cdf4e908f6cb6bcaf0acde"} Feb 16 10:17:20 crc kubenswrapper[4814]: I0216 10:17:20.727757 4814 generic.go:334] "Generic (PLEG): container finished" podID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerID="397c33d683f4ba9786fff67950eadb4f41b87c5da15962eb10c89467ed44ed14" exitCode=0 Feb 16 10:17:20 crc kubenswrapper[4814]: I0216 10:17:20.727888 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerDied","Data":"397c33d683f4ba9786fff67950eadb4f41b87c5da15962eb10c89467ed44ed14"} Feb 16 10:17:21 crc kubenswrapper[4814]: I0216 10:17:21.744788 4814 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerStarted","Data":"feb94b8e73aa829f4f126ce7d272357bc0bfdf874c79f6659a0cd2dec31d6bf1"} Feb 16 10:17:22 crc kubenswrapper[4814]: I0216 10:17:22.762777 4814 generic.go:334] "Generic (PLEG): container finished" podID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerID="feb94b8e73aa829f4f126ce7d272357bc0bfdf874c79f6659a0cd2dec31d6bf1" exitCode=0 Feb 16 10:17:22 crc kubenswrapper[4814]: I0216 10:17:22.762873 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerDied","Data":"feb94b8e73aa829f4f126ce7d272357bc0bfdf874c79f6659a0cd2dec31d6bf1"} Feb 16 10:17:23 crc kubenswrapper[4814]: I0216 10:17:23.007056 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:17:23 crc kubenswrapper[4814]: E0216 10:17:23.007785 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:17:23 crc kubenswrapper[4814]: I0216 10:17:23.779620 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerStarted","Data":"929f690df14ee52cbc4f16deb847aebf18c04b51624052f7adbda6b8d4e965c9"} Feb 16 10:17:23 crc kubenswrapper[4814]: I0216 10:17:23.818005 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5fx5z" 
podStartSLOduration=3.352172133 podStartE2EDuration="5.817976217s" podCreationTimestamp="2026-02-16 10:17:18 +0000 UTC" firstStartedPulling="2026-02-16 10:17:20.731336334 +0000 UTC m=+1898.424492544" lastFinishedPulling="2026-02-16 10:17:23.197140418 +0000 UTC m=+1900.890296628" observedRunningTime="2026-02-16 10:17:23.807914524 +0000 UTC m=+1901.501070744" watchObservedRunningTime="2026-02-16 10:17:23.817976217 +0000 UTC m=+1901.511132407" Feb 16 10:17:27 crc kubenswrapper[4814]: I0216 10:17:27.994263 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:17:27 crc kubenswrapper[4814]: E0216 10:17:27.996813 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:17:29 crc kubenswrapper[4814]: I0216 10:17:29.078107 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:29 crc kubenswrapper[4814]: I0216 10:17:29.078188 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:29 crc kubenswrapper[4814]: I0216 10:17:29.155843 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:29 crc kubenswrapper[4814]: I0216 10:17:29.939913 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:30 crc kubenswrapper[4814]: I0216 10:17:30.030433 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 10:17:31 crc 
kubenswrapper[4814]: I0216 10:17:31.886401 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5fx5z" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="registry-server" containerID="cri-o://929f690df14ee52cbc4f16deb847aebf18c04b51624052f7adbda6b8d4e965c9" gracePeriod=2 Feb 16 10:17:32 crc kubenswrapper[4814]: I0216 10:17:32.900600 4814 generic.go:334] "Generic (PLEG): container finished" podID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerID="929f690df14ee52cbc4f16deb847aebf18c04b51624052f7adbda6b8d4e965c9" exitCode=0 Feb 16 10:17:32 crc kubenswrapper[4814]: I0216 10:17:32.900665 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerDied","Data":"929f690df14ee52cbc4f16deb847aebf18c04b51624052f7adbda6b8d4e965c9"} Feb 16 10:17:32 crc kubenswrapper[4814]: I0216 10:17:32.901524 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5fx5z" event={"ID":"33a2ae69-b7fe-4c28-89e0-c61ef192d289","Type":"ContainerDied","Data":"b3deaf2e8c00fc461f837f37c017946625a9497829cdf4e908f6cb6bcaf0acde"} Feb 16 10:17:32 crc kubenswrapper[4814]: I0216 10:17:32.901564 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3deaf2e8c00fc461f837f37c017946625a9497829cdf4e908f6cb6bcaf0acde" Feb 16 10:17:32 crc kubenswrapper[4814]: I0216 10:17:32.954679 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.123817 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7qlf\" (UniqueName: \"kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf\") pod \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.124164 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content\") pod \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.124252 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities\") pod \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\" (UID: \"33a2ae69-b7fe-4c28-89e0-c61ef192d289\") " Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.125189 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities" (OuterVolumeSpecName: "utilities") pod "33a2ae69-b7fe-4c28-89e0-c61ef192d289" (UID: "33a2ae69-b7fe-4c28-89e0-c61ef192d289"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.125486 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.135854 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf" (OuterVolumeSpecName: "kube-api-access-d7qlf") pod "33a2ae69-b7fe-4c28-89e0-c61ef192d289" (UID: "33a2ae69-b7fe-4c28-89e0-c61ef192d289"). InnerVolumeSpecName "kube-api-access-d7qlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.174271 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33a2ae69-b7fe-4c28-89e0-c61ef192d289" (UID: "33a2ae69-b7fe-4c28-89e0-c61ef192d289"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.227236 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a2ae69-b7fe-4c28-89e0-c61ef192d289-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.227329 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7qlf\" (UniqueName: \"kubernetes.io/projected/33a2ae69-b7fe-4c28-89e0-c61ef192d289-kube-api-access-d7qlf\") on node \"crc\" DevicePath \"\"" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.917336 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5fx5z" Feb 16 10:17:33 crc kubenswrapper[4814]: I0216 10:17:33.985920 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 10:17:34 crc kubenswrapper[4814]: I0216 10:17:34.002199 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5fx5z"] Feb 16 10:17:35 crc kubenswrapper[4814]: I0216 10:17:35.014342 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" path="/var/lib/kubelet/pods/33a2ae69-b7fe-4c28-89e0-c61ef192d289/volumes" Feb 16 10:17:36 crc kubenswrapper[4814]: I0216 10:17:36.994222 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:17:36 crc kubenswrapper[4814]: E0216 10:17:36.994922 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:17:43 crc kubenswrapper[4814]: I0216 10:17:43.017866 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:17:43 crc kubenswrapper[4814]: E0216 10:17:43.019565 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:17:44 crc kubenswrapper[4814]: I0216 
10:17:44.077774 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-j4vhw"] Feb 16 10:17:44 crc kubenswrapper[4814]: I0216 10:17:44.096429 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8lvv6"] Feb 16 10:17:44 crc kubenswrapper[4814]: I0216 10:17:44.110020 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8lvv6"] Feb 16 10:17:44 crc kubenswrapper[4814]: I0216 10:17:44.125689 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-j4vhw"] Feb 16 10:17:45 crc kubenswrapper[4814]: I0216 10:17:45.021722 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a350ef7d-4057-40fd-807d-5b29d2b3b465" path="/var/lib/kubelet/pods/a350ef7d-4057-40fd-807d-5b29d2b3b465/volumes" Feb 16 10:17:45 crc kubenswrapper[4814]: I0216 10:17:45.024346 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a3e754-132c-4c4e-9593-91ca3f391363" path="/var/lib/kubelet/pods/e5a3e754-132c-4c4e-9593-91ca3f391363/volumes" Feb 16 10:17:49 crc kubenswrapper[4814]: I0216 10:17:49.995086 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:17:49 crc kubenswrapper[4814]: E0216 10:17:49.998171 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:17:50 crc kubenswrapper[4814]: I0216 10:17:50.311464 4814 scope.go:117] "RemoveContainer" containerID="4d94f90ae5a3994a4186ff42d4734c084f593b9394d53811cb9ea2d928383a0b" Feb 16 10:17:50 crc kubenswrapper[4814]: I0216 
10:17:50.381007 4814 scope.go:117] "RemoveContainer" containerID="c61937128bff8df80b337778f162e754deeb832e6f35f9ef72e31ab3fe7a6c2d" Feb 16 10:17:50 crc kubenswrapper[4814]: I0216 10:17:50.459300 4814 scope.go:117] "RemoveContainer" containerID="10fb5bb4f397a76c0c678ba3167391f5faf496527465b5e668eceda1dc129228" Feb 16 10:17:54 crc kubenswrapper[4814]: I0216 10:17:54.994722 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:17:54 crc kubenswrapper[4814]: E0216 10:17:54.995846 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:17:58 crc kubenswrapper[4814]: I0216 10:17:58.047247 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-47nvw"] Feb 16 10:17:58 crc kubenswrapper[4814]: I0216 10:17:58.075940 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-47nvw"] Feb 16 10:17:59 crc kubenswrapper[4814]: I0216 10:17:59.018505 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57a813a-2457-4800-8eef-a91c409659f3" path="/var/lib/kubelet/pods/e57a813a-2457-4800-8eef-a91c409659f3/volumes" Feb 16 10:18:03 crc kubenswrapper[4814]: I0216 10:18:03.003370 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:18:03 crc kubenswrapper[4814]: E0216 10:18:03.006398 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:18:03 crc kubenswrapper[4814]: I0216 10:18:03.069637 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-p4vk6"] Feb 16 10:18:03 crc kubenswrapper[4814]: I0216 10:18:03.080466 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-p4vk6"] Feb 16 10:18:05 crc kubenswrapper[4814]: I0216 10:18:05.008986 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89e3cee-9acb-4b29-ab9a-ad50616aa9d4" path="/var/lib/kubelet/pods/c89e3cee-9acb-4b29-ab9a-ad50616aa9d4/volumes" Feb 16 10:18:06 crc kubenswrapper[4814]: I0216 10:18:06.994598 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:18:06 crc kubenswrapper[4814]: E0216 10:18:06.995474 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:18:11 crc kubenswrapper[4814]: I0216 10:18:11.056673 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-chlgm"] Feb 16 10:18:11 crc kubenswrapper[4814]: I0216 10:18:11.075115 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-chlgm"] Feb 16 10:18:13 crc kubenswrapper[4814]: I0216 10:18:13.013708 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac000d0d-d120-4828-b60f-3c2e3371dc68" path="/var/lib/kubelet/pods/ac000d0d-d120-4828-b60f-3c2e3371dc68/volumes" Feb 16 10:18:15 crc kubenswrapper[4814]: I0216 
10:18:15.995498 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:18:15 crc kubenswrapper[4814]: E0216 10:18:15.997063 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:18:21 crc kubenswrapper[4814]: I0216 10:18:21.995211 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:18:21 crc kubenswrapper[4814]: E0216 10:18:21.996737 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:18:28 crc kubenswrapper[4814]: I0216 10:18:28.994273 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:18:28 crc kubenswrapper[4814]: E0216 10:18:28.995772 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:18:33 crc kubenswrapper[4814]: I0216 10:18:33.994169 4814 scope.go:117] "RemoveContainer" 
containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:18:33 crc kubenswrapper[4814]: E0216 10:18:33.995806 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:18:43 crc kubenswrapper[4814]: I0216 10:18:43.007214 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40" Feb 16 10:18:43 crc kubenswrapper[4814]: I0216 10:18:43.885217 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633"} Feb 16 10:18:44 crc kubenswrapper[4814]: I0216 10:18:44.995976 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:18:44 crc kubenswrapper[4814]: E0216 10:18:44.996928 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:18:50 crc kubenswrapper[4814]: I0216 10:18:50.607588 4814 scope.go:117] "RemoveContainer" containerID="acbf006fee012b44f22e856025634e5be593954e8ef65de06909047c7cac5cba" Feb 16 10:18:50 crc kubenswrapper[4814]: I0216 10:18:50.668501 4814 scope.go:117] "RemoveContainer" containerID="e07ce1328e93a7aaf324f7ec49fc13b10732c980b355e901bff56ff144383dd7" Feb 16 10:18:50 crc 
kubenswrapper[4814]: I0216 10:18:50.726168 4814 scope.go:117] "RemoveContainer" containerID="30319370de6609a922739d32ec09f9f87f94658ae16fb92274c415dc0a46e20f" Feb 16 10:18:58 crc kubenswrapper[4814]: I0216 10:18:58.995685 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:18:58 crc kubenswrapper[4814]: E0216 10:18:58.997201 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.062950 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-l5qxl"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.073726 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ce84-account-create-update-pjk5v"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.082304 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8cpcd"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.091166 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-12af-account-create-update-cg5n4"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.103675 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d25f-account-create-update-p2nqz"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.116263 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-12af-account-create-update-cg5n4"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.127858 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-l5qxl"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 
10:19:05.142399 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d25f-account-create-update-p2nqz"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.156553 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ce84-account-create-update-pjk5v"] Feb 16 10:19:05 crc kubenswrapper[4814]: I0216 10:19:05.167152 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8cpcd"] Feb 16 10:19:06 crc kubenswrapper[4814]: I0216 10:19:06.041659 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-g68vm"] Feb 16 10:19:06 crc kubenswrapper[4814]: I0216 10:19:06.052824 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-g68vm"] Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.011470 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f31668d-a857-480f-b05a-fa46298ea10e" path="/var/lib/kubelet/pods/0f31668d-a857-480f-b05a-fa46298ea10e/volumes" Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.014108 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481f2ffd-8a55-4bb8-bbac-f0862c645d53" path="/var/lib/kubelet/pods/481f2ffd-8a55-4bb8-bbac-f0862c645d53/volumes" Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.015934 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59dfc847-309b-4f50-8d29-9418ba80cbd7" path="/var/lib/kubelet/pods/59dfc847-309b-4f50-8d29-9418ba80cbd7/volumes" Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.017325 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a291623-9f03-4157-b461-a3ece83a7c03" path="/var/lib/kubelet/pods/9a291623-9f03-4157-b461-a3ece83a7c03/volumes" Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.019022 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a47bbc4d-27c9-488a-814c-4223fcdc8c2c" 
path="/var/lib/kubelet/pods/a47bbc4d-27c9-488a-814c-4223fcdc8c2c/volumes" Feb 16 10:19:07 crc kubenswrapper[4814]: I0216 10:19:07.020933 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae29d14a-c8e0-4754-98da-720dd05df22f" path="/var/lib/kubelet/pods/ae29d14a-c8e0-4754-98da-720dd05df22f/volumes" Feb 16 10:19:13 crc kubenswrapper[4814]: I0216 10:19:13.006759 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:19:13 crc kubenswrapper[4814]: E0216 10:19:13.008962 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:19:25 crc kubenswrapper[4814]: I0216 10:19:25.993924 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:19:25 crc kubenswrapper[4814]: E0216 10:19:25.994920 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.621816 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:32 crc kubenswrapper[4814]: E0216 10:19:32.623568 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="registry-server" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.623605 4814 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="registry-server" Feb 16 10:19:32 crc kubenswrapper[4814]: E0216 10:19:32.623655 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="extract-content" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.623667 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="extract-content" Feb 16 10:19:32 crc kubenswrapper[4814]: E0216 10:19:32.623706 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="extract-utilities" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.623718 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="extract-utilities" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.624044 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a2ae69-b7fe-4c28-89e0-c61ef192d289" containerName="registry-server" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.626497 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.654630 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.813411 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.813488 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j6v2\" (UniqueName: \"kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.813571 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.917444 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.917940 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8j6v2\" (UniqueName: \"kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.918069 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.917988 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.918606 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:32 crc kubenswrapper[4814]: I0216 10:19:32.959566 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j6v2\" (UniqueName: \"kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2\") pod \"redhat-operators-6245f\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:33 crc kubenswrapper[4814]: I0216 10:19:33.254351 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:33 crc kubenswrapper[4814]: I0216 10:19:33.808748 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:34 crc kubenswrapper[4814]: I0216 10:19:34.613898 4814 generic.go:334] "Generic (PLEG): container finished" podID="364a69ef-72d8-4b73-9002-558d3b629d11" containerID="3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6" exitCode=0 Feb 16 10:19:34 crc kubenswrapper[4814]: I0216 10:19:34.614005 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerDied","Data":"3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6"} Feb 16 10:19:34 crc kubenswrapper[4814]: I0216 10:19:34.614456 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerStarted","Data":"cd5e4a5ea2f7783ae9447983b9e5e3c3192f65cd0a7319ce74771024c7cc4dcd"} Feb 16 10:19:34 crc kubenswrapper[4814]: I0216 10:19:34.617709 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:19:35 crc kubenswrapper[4814]: I0216 10:19:35.628411 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerStarted","Data":"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548"} Feb 16 10:19:36 crc kubenswrapper[4814]: E0216 10:19:36.585396 4814 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364a69ef_72d8_4b73_9002_558d3b629d11.slice/crio-32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548.scope\": 
RecentStats: unable to find data in memory cache]" Feb 16 10:19:37 crc kubenswrapper[4814]: I0216 10:19:37.659805 4814 generic.go:334] "Generic (PLEG): container finished" podID="364a69ef-72d8-4b73-9002-558d3b629d11" containerID="32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548" exitCode=0 Feb 16 10:19:37 crc kubenswrapper[4814]: I0216 10:19:37.659863 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerDied","Data":"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548"} Feb 16 10:19:38 crc kubenswrapper[4814]: I0216 10:19:38.676841 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerStarted","Data":"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217"} Feb 16 10:19:38 crc kubenswrapper[4814]: I0216 10:19:38.708403 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6245f" podStartSLOduration=3.118745275 podStartE2EDuration="6.708336762s" podCreationTimestamp="2026-02-16 10:19:32 +0000 UTC" firstStartedPulling="2026-02-16 10:19:34.617344074 +0000 UTC m=+2032.310500254" lastFinishedPulling="2026-02-16 10:19:38.206935561 +0000 UTC m=+2035.900091741" observedRunningTime="2026-02-16 10:19:38.704093626 +0000 UTC m=+2036.397249836" watchObservedRunningTime="2026-02-16 10:19:38.708336762 +0000 UTC m=+2036.401492982" Feb 16 10:19:40 crc kubenswrapper[4814]: I0216 10:19:40.993927 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:19:40 crc kubenswrapper[4814]: E0216 10:19:40.994986 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:19:43 crc kubenswrapper[4814]: I0216 10:19:43.255000 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:43 crc kubenswrapper[4814]: I0216 10:19:43.255966 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:44 crc kubenswrapper[4814]: I0216 10:19:44.306189 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6245f" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="registry-server" probeResult="failure" output=< Feb 16 10:19:44 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 10:19:44 crc kubenswrapper[4814]: > Feb 16 10:19:49 crc kubenswrapper[4814]: I0216 10:19:49.064220 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t7qmf"] Feb 16 10:19:49 crc kubenswrapper[4814]: I0216 10:19:49.075221 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t7qmf"] Feb 16 10:19:50 crc kubenswrapper[4814]: I0216 10:19:50.843031 4814 scope.go:117] "RemoveContainer" containerID="aa1947088f412c86c9e4d41e75f4670dc3ebce09e2818308c5d3362b6fa8c7fb" Feb 16 10:19:50 crc kubenswrapper[4814]: I0216 10:19:50.876037 4814 scope.go:117] "RemoveContainer" containerID="7fc144d6f269de87ecf431b69b81c00f96ac5606c279b416d72cd2a2d9d279cd" Feb 16 10:19:50 crc kubenswrapper[4814]: I0216 10:19:50.925657 4814 scope.go:117] "RemoveContainer" containerID="49f6d9c013b882a0522784f8bf08e9ac5d9d491646e6654b34dc68fadcb5b95b" Feb 16 10:19:50 crc kubenswrapper[4814]: I0216 10:19:50.972870 4814 scope.go:117] "RemoveContainer" 
containerID="14c1dff884c79f91621465ed2c72ba18db7ad0965d3c2934d8d2ebf1203f7939" Feb 16 10:19:51 crc kubenswrapper[4814]: I0216 10:19:51.012464 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75398570-0b03-46a0-93b7-84c92628a4d9" path="/var/lib/kubelet/pods/75398570-0b03-46a0-93b7-84c92628a4d9/volumes" Feb 16 10:19:51 crc kubenswrapper[4814]: I0216 10:19:51.032713 4814 scope.go:117] "RemoveContainer" containerID="687c02b54e05e906a8df2893690d5ca2b00e63a32dfc35086b744c5e27be5e70" Feb 16 10:19:51 crc kubenswrapper[4814]: I0216 10:19:51.078317 4814 scope.go:117] "RemoveContainer" containerID="585f5d016453ea99cc662d05ab2b20c135274d9d14c7b88add431dba1beb4299" Feb 16 10:19:53 crc kubenswrapper[4814]: I0216 10:19:53.338469 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:53 crc kubenswrapper[4814]: I0216 10:19:53.410182 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:53 crc kubenswrapper[4814]: I0216 10:19:53.590112 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:53 crc kubenswrapper[4814]: I0216 10:19:53.994327 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:19:54 crc kubenswrapper[4814]: I0216 10:19:54.893726 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"} Feb 16 10:19:54 crc kubenswrapper[4814]: I0216 10:19:54.894025 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6245f" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="registry-server" 
containerID="cri-o://38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217" gracePeriod=2 Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.371799 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.407450 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities\") pod \"364a69ef-72d8-4b73-9002-558d3b629d11\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.407683 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content\") pod \"364a69ef-72d8-4b73-9002-558d3b629d11\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.407718 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j6v2\" (UniqueName: \"kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2\") pod \"364a69ef-72d8-4b73-9002-558d3b629d11\" (UID: \"364a69ef-72d8-4b73-9002-558d3b629d11\") " Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.408709 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities" (OuterVolumeSpecName: "utilities") pod "364a69ef-72d8-4b73-9002-558d3b629d11" (UID: "364a69ef-72d8-4b73-9002-558d3b629d11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.424291 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2" (OuterVolumeSpecName: "kube-api-access-8j6v2") pod "364a69ef-72d8-4b73-9002-558d3b629d11" (UID: "364a69ef-72d8-4b73-9002-558d3b629d11"). InnerVolumeSpecName "kube-api-access-8j6v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.513079 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.513117 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j6v2\" (UniqueName: \"kubernetes.io/projected/364a69ef-72d8-4b73-9002-558d3b629d11-kube-api-access-8j6v2\") on node \"crc\" DevicePath \"\"" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.569645 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "364a69ef-72d8-4b73-9002-558d3b629d11" (UID: "364a69ef-72d8-4b73-9002-558d3b629d11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.615521 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/364a69ef-72d8-4b73-9002-558d3b629d11-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.910950 4814 generic.go:334] "Generic (PLEG): container finished" podID="364a69ef-72d8-4b73-9002-558d3b629d11" containerID="38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217" exitCode=0 Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.911031 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerDied","Data":"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217"} Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.911474 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6245f" event={"ID":"364a69ef-72d8-4b73-9002-558d3b629d11","Type":"ContainerDied","Data":"cd5e4a5ea2f7783ae9447983b9e5e3c3192f65cd0a7319ce74771024c7cc4dcd"} Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.911507 4814 scope.go:117] "RemoveContainer" containerID="38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.911121 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6245f" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.951731 4814 scope.go:117] "RemoveContainer" containerID="32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.958318 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.984322 4814 scope.go:117] "RemoveContainer" containerID="3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6" Feb 16 10:19:55 crc kubenswrapper[4814]: I0216 10:19:55.989448 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6245f"] Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.052705 4814 scope.go:117] "RemoveContainer" containerID="38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217" Feb 16 10:19:56 crc kubenswrapper[4814]: E0216 10:19:56.053378 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217\": container with ID starting with 38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217 not found: ID does not exist" containerID="38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217" Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.053426 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217"} err="failed to get container status \"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217\": rpc error: code = NotFound desc = could not find container \"38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217\": container with ID starting with 38b53b7cc2f9688cef877ef1c6c4f780a8633e020903f7d1958ed7dff2d4d217 not found: ID does 
not exist" Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.053456 4814 scope.go:117] "RemoveContainer" containerID="32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548" Feb 16 10:19:56 crc kubenswrapper[4814]: E0216 10:19:56.053815 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548\": container with ID starting with 32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548 not found: ID does not exist" containerID="32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548" Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.053845 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548"} err="failed to get container status \"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548\": rpc error: code = NotFound desc = could not find container \"32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548\": container with ID starting with 32256e6f19432247d077f479fd712be1dae882227b258b7802baf2dc8e106548 not found: ID does not exist" Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.053862 4814 scope.go:117] "RemoveContainer" containerID="3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6" Feb 16 10:19:56 crc kubenswrapper[4814]: E0216 10:19:56.054128 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6\": container with ID starting with 3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6 not found: ID does not exist" containerID="3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6" Feb 16 10:19:56 crc kubenswrapper[4814]: I0216 10:19:56.054146 4814 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6"} err="failed to get container status \"3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6\": rpc error: code = NotFound desc = could not find container \"3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6\": container with ID starting with 3482d34c170b1c48d7fd4c1569f59d6013d9f63b741be738112af3b8ded1b0a6 not found: ID does not exist" Feb 16 10:19:57 crc kubenswrapper[4814]: I0216 10:19:57.014346 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" path="/var/lib/kubelet/pods/364a69ef-72d8-4b73-9002-558d3b629d11/volumes" Feb 16 10:19:57 crc kubenswrapper[4814]: I0216 10:19:57.676633 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:19:58 crc kubenswrapper[4814]: I0216 10:19:58.984093 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" exitCode=0 Feb 16 10:19:58 crc kubenswrapper[4814]: I0216 10:19:58.984198 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"} Feb 16 10:19:58 crc kubenswrapper[4814]: I0216 10:19:58.984830 4814 scope.go:117] "RemoveContainer" containerID="de6bab537f6a5e2619ebb143dce928bef52c62bd582fb8014f359782a60d4fd9" Feb 16 10:19:58 crc kubenswrapper[4814]: I0216 10:19:58.986381 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:19:58 crc kubenswrapper[4814]: E0216 10:19:58.987166 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:00 crc kubenswrapper[4814]: I0216 10:20:00.676738 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:20:00 crc kubenswrapper[4814]: I0216 10:20:00.679350 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:00 crc kubenswrapper[4814]: E0216 10:20:00.680068 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:02 crc kubenswrapper[4814]: I0216 10:20:02.677140 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:20:02 crc kubenswrapper[4814]: I0216 10:20:02.679793 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:02 crc kubenswrapper[4814]: E0216 10:20:02.680464 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:13 crc kubenswrapper[4814]: I0216 10:20:13.038459 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:13 crc 
kubenswrapper[4814]: E0216 10:20:13.040088 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:15 crc kubenswrapper[4814]: I0216 10:20:15.089977 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjvcb"] Feb 16 10:20:15 crc kubenswrapper[4814]: I0216 10:20:15.104264 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xjvcb"] Feb 16 10:20:17 crc kubenswrapper[4814]: I0216 10:20:17.013109 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4cc5476-cf44-45e0-877d-85494accff3c" path="/var/lib/kubelet/pods/f4cc5476-cf44-45e0-877d-85494accff3c/volumes" Feb 16 10:20:21 crc kubenswrapper[4814]: I0216 10:20:21.058183 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-grjp4"] Feb 16 10:20:21 crc kubenswrapper[4814]: I0216 10:20:21.107255 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-grjp4"] Feb 16 10:20:23 crc kubenswrapper[4814]: I0216 10:20:23.007378 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24af27d6-b9ee-4abc-b460-7633eb556cd7" path="/var/lib/kubelet/pods/24af27d6-b9ee-4abc-b460-7633eb556cd7/volumes" Feb 16 10:20:26 crc kubenswrapper[4814]: I0216 10:20:26.995665 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:26 crc kubenswrapper[4814]: E0216 10:20:26.997361 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:40 crc kubenswrapper[4814]: I0216 10:20:40.994741 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:40 crc kubenswrapper[4814]: E0216 10:20:40.995904 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:20:51 crc kubenswrapper[4814]: I0216 10:20:51.310796 4814 scope.go:117] "RemoveContainer" containerID="31b3173918f7f09321834fd4de5f62557cf8038e7df792e2d0e4d7d89f7bcd26" Feb 16 10:20:51 crc kubenswrapper[4814]: I0216 10:20:51.383304 4814 scope.go:117] "RemoveContainer" containerID="e05fdfec37ca4b0c4b83e8ab45db009e22805b6d5e149726ed96819c763fc9a3" Feb 16 10:20:51 crc kubenswrapper[4814]: I0216 10:20:51.457150 4814 scope.go:117] "RemoveContainer" containerID="9b17d764852f065060f7a58d165a60d709fd5339df426c0f16a6f192aed24a6a" Feb 16 10:20:54 crc kubenswrapper[4814]: I0216 10:20:54.993868 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:20:54 crc kubenswrapper[4814]: E0216 10:20:54.994810 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:21:00 crc kubenswrapper[4814]: I0216 10:21:00.098582 4814 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6lgml"] Feb 16 10:21:00 crc kubenswrapper[4814]: I0216 10:21:00.135565 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6lgml"] Feb 16 10:21:01 crc kubenswrapper[4814]: I0216 10:21:01.014901 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54488708-2f13-4ecc-a7a3-fb7372dc39ee" path="/var/lib/kubelet/pods/54488708-2f13-4ecc-a7a3-fb7372dc39ee/volumes" Feb 16 10:21:06 crc kubenswrapper[4814]: I0216 10:21:06.994128 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:21:06 crc kubenswrapper[4814]: E0216 10:21:06.995061 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:21:07 crc kubenswrapper[4814]: I0216 10:21:07.960202 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:21:07 crc kubenswrapper[4814]: I0216 10:21:07.960771 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:21:17 crc kubenswrapper[4814]: I0216 10:21:17.994899 4814 scope.go:117] "RemoveContainer" 
containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:21:17 crc kubenswrapper[4814]: E0216 10:21:17.996361 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:21:30 crc kubenswrapper[4814]: I0216 10:21:30.994511 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:21:30 crc kubenswrapper[4814]: E0216 10:21:30.995221 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:21:37 crc kubenswrapper[4814]: I0216 10:21:37.960737 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:21:37 crc kubenswrapper[4814]: I0216 10:21:37.961764 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:21:45 crc kubenswrapper[4814]: I0216 10:21:45.994004 4814 scope.go:117] "RemoveContainer" 
containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:21:45 crc kubenswrapper[4814]: E0216 10:21:45.995116 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:21:51 crc kubenswrapper[4814]: I0216 10:21:51.635977 4814 scope.go:117] "RemoveContainer" containerID="2c90e2aa8697c3fcee437225964bfe9ec69dfea4e3df918ca42aa2ed408e7c3d" Feb 16 10:21:57 crc kubenswrapper[4814]: I0216 10:21:57.993737 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd" Feb 16 10:21:57 crc kubenswrapper[4814]: E0216 10:21:57.995225 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:22:07 crc kubenswrapper[4814]: I0216 10:22:07.960867 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:22:07 crc kubenswrapper[4814]: I0216 10:22:07.961846 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused"
Feb 16 10:22:07 crc kubenswrapper[4814]: I0216 10:22:07.961925 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2"
Feb 16 10:22:07 crc kubenswrapper[4814]: I0216 10:22:07.963134 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 10:22:07 crc kubenswrapper[4814]: I0216 10:22:07.963228 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633" gracePeriod=600
Feb 16 10:22:08 crc kubenswrapper[4814]: I0216 10:22:08.599508 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633" exitCode=0
Feb 16 10:22:08 crc kubenswrapper[4814]: I0216 10:22:08.599580 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633"}
Feb 16 10:22:08 crc kubenswrapper[4814]: I0216 10:22:08.600068 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"}
Feb 16 10:22:08 crc kubenswrapper[4814]: I0216 10:22:08.600091 4814 scope.go:117] "RemoveContainer" containerID="cc1e120f0638a226676fde5853727967e633c4a8ddad8b45f18b8a3e7b3b9e40"
Feb 16 10:22:11 crc kubenswrapper[4814]: I0216 10:22:11.994797 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:22:11 crc kubenswrapper[4814]: E0216 10:22:11.996401 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:22:24 crc kubenswrapper[4814]: I0216 10:22:24.998424 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:22:25 crc kubenswrapper[4814]: E0216 10:22:25.000263 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:22:35 crc kubenswrapper[4814]: I0216 10:22:35.994046 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:22:35 crc kubenswrapper[4814]: E0216 10:22:35.995135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:22:50 crc kubenswrapper[4814]: I0216 10:22:50.994889 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:22:50 crc kubenswrapper[4814]: E0216 10:22:50.996786 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:23:05 crc kubenswrapper[4814]: I0216 10:23:05.993227 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:23:05 crc kubenswrapper[4814]: E0216 10:23:05.994377 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:23:18 crc kubenswrapper[4814]: I0216 10:23:18.995226 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:23:18 crc kubenswrapper[4814]: E0216 10:23:18.996506 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:23:29 crc kubenswrapper[4814]: I0216 10:23:29.993588 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:23:29 crc kubenswrapper[4814]: E0216 10:23:29.994981 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:23:44 crc kubenswrapper[4814]: I0216 10:23:44.994675 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:23:44 crc kubenswrapper[4814]: E0216 10:23:44.995812 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:23:51 crc kubenswrapper[4814]: I0216 10:23:51.814955 4814 scope.go:117] "RemoveContainer" containerID="397c33d683f4ba9786fff67950eadb4f41b87c5da15962eb10c89467ed44ed14"
Feb 16 10:23:51 crc kubenswrapper[4814]: I0216 10:23:51.859700 4814 scope.go:117] "RemoveContainer" containerID="929f690df14ee52cbc4f16deb847aebf18c04b51624052f7adbda6b8d4e965c9"
Feb 16 10:23:51 crc kubenswrapper[4814]: I0216 10:23:51.906113 4814 scope.go:117] "RemoveContainer" containerID="feb94b8e73aa829f4f126ce7d272357bc0bfdf874c79f6659a0cd2dec31d6bf1"
Feb 16 10:23:58 crc kubenswrapper[4814]: I0216 10:23:58.994636 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:23:58 crc kubenswrapper[4814]: E0216 10:23:58.996255 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.318592 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:07 crc kubenswrapper[4814]: E0216 10:24:07.319902 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="registry-server"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.319919 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="registry-server"
Feb 16 10:24:07 crc kubenswrapper[4814]: E0216 10:24:07.319931 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="extract-content"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.319938 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="extract-content"
Feb 16 10:24:07 crc kubenswrapper[4814]: E0216 10:24:07.319962 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="extract-utilities"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.319970 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="extract-utilities"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.320204 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="364a69ef-72d8-4b73-9002-558d3b629d11" containerName="registry-server"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.322191 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.357479 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.459990 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l5v9\" (UniqueName: \"kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.460114 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.460517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.562682 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.562930 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l5v9\" (UniqueName: \"kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.562986 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.563485 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.563642 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.586043 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l5v9\" (UniqueName: \"kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9\") pod \"redhat-marketplace-jpcvc\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") " pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:07 crc kubenswrapper[4814]: I0216 10:24:07.650272 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:08 crc kubenswrapper[4814]: I0216 10:24:08.189737 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:09 crc kubenswrapper[4814]: I0216 10:24:09.166515 4814 generic.go:334] "Generic (PLEG): container finished" podID="24107045-4807-42a9-8237-cb94e87bbdc0" containerID="b70cc66088d3481eb75731a179396288a61c45d4a5ab165557663357494c1949" exitCode=0
Feb 16 10:24:09 crc kubenswrapper[4814]: I0216 10:24:09.166574 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerDied","Data":"b70cc66088d3481eb75731a179396288a61c45d4a5ab165557663357494c1949"}
Feb 16 10:24:09 crc kubenswrapper[4814]: I0216 10:24:09.167007 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerStarted","Data":"0d9bc10b853b74bb8108192b83ab134b5176a6f2a7449a28258d51df28f6966e"}
Feb 16 10:24:10 crc kubenswrapper[4814]: I0216 10:24:10.182485 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerStarted","Data":"f539bb60153e4cc0aa4ab601ce544b3c74b9c189f99189625c463bb7155486a7"}
Feb 16 10:24:11 crc kubenswrapper[4814]: I0216 10:24:11.199782 4814 generic.go:334] "Generic (PLEG): container finished" podID="24107045-4807-42a9-8237-cb94e87bbdc0" containerID="f539bb60153e4cc0aa4ab601ce544b3c74b9c189f99189625c463bb7155486a7" exitCode=0
Feb 16 10:24:11 crc kubenswrapper[4814]: I0216 10:24:11.199846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerDied","Data":"f539bb60153e4cc0aa4ab601ce544b3c74b9c189f99189625c463bb7155486a7"}
Feb 16 10:24:12 crc kubenswrapper[4814]: I0216 10:24:12.215566 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerStarted","Data":"c5853d9c67709fd4bcf12a47aea42d5072e84c6d4deec317dfb9bb5257b4a162"}
Feb 16 10:24:12 crc kubenswrapper[4814]: I0216 10:24:12.251585 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpcvc" podStartSLOduration=2.6641282889999998 podStartE2EDuration="5.25152134s" podCreationTimestamp="2026-02-16 10:24:07 +0000 UTC" firstStartedPulling="2026-02-16 10:24:09.168945809 +0000 UTC m=+2306.862101989" lastFinishedPulling="2026-02-16 10:24:11.75633886 +0000 UTC m=+2309.449495040" observedRunningTime="2026-02-16 10:24:12.245841996 +0000 UTC m=+2309.938998186" watchObservedRunningTime="2026-02-16 10:24:12.25152134 +0000 UTC m=+2309.944677520"
Feb 16 10:24:13 crc kubenswrapper[4814]: I0216 10:24:13.014674 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:24:13 crc kubenswrapper[4814]: E0216 10:24:13.015705 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:24:17 crc kubenswrapper[4814]: I0216 10:24:17.650900 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:17 crc kubenswrapper[4814]: I0216 10:24:17.651861 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:17 crc kubenswrapper[4814]: I0216 10:24:17.703982 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:18 crc kubenswrapper[4814]: I0216 10:24:18.353733 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:18 crc kubenswrapper[4814]: I0216 10:24:18.417617 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:20 crc kubenswrapper[4814]: I0216 10:24:20.299398 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpcvc" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="registry-server" containerID="cri-o://c5853d9c67709fd4bcf12a47aea42d5072e84c6d4deec317dfb9bb5257b4a162" gracePeriod=2
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.313167 4814 generic.go:334] "Generic (PLEG): container finished" podID="24107045-4807-42a9-8237-cb94e87bbdc0" containerID="c5853d9c67709fd4bcf12a47aea42d5072e84c6d4deec317dfb9bb5257b4a162" exitCode=0
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.313243 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerDied","Data":"c5853d9c67709fd4bcf12a47aea42d5072e84c6d4deec317dfb9bb5257b4a162"}
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.314332 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpcvc" event={"ID":"24107045-4807-42a9-8237-cb94e87bbdc0","Type":"ContainerDied","Data":"0d9bc10b853b74bb8108192b83ab134b5176a6f2a7449a28258d51df28f6966e"}
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.314359 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d9bc10b853b74bb8108192b83ab134b5176a6f2a7449a28258d51df28f6966e"
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.376862 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.425808 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities\") pod \"24107045-4807-42a9-8237-cb94e87bbdc0\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") "
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.425945 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content\") pod \"24107045-4807-42a9-8237-cb94e87bbdc0\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") "
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.426067 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l5v9\" (UniqueName: \"kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9\") pod \"24107045-4807-42a9-8237-cb94e87bbdc0\" (UID: \"24107045-4807-42a9-8237-cb94e87bbdc0\") "
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.426819 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities" (OuterVolumeSpecName: "utilities") pod "24107045-4807-42a9-8237-cb94e87bbdc0" (UID: "24107045-4807-42a9-8237-cb94e87bbdc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.434376 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9" (OuterVolumeSpecName: "kube-api-access-5l5v9") pod "24107045-4807-42a9-8237-cb94e87bbdc0" (UID: "24107045-4807-42a9-8237-cb94e87bbdc0"). InnerVolumeSpecName "kube-api-access-5l5v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.459663 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24107045-4807-42a9-8237-cb94e87bbdc0" (UID: "24107045-4807-42a9-8237-cb94e87bbdc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.529400 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.529457 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24107045-4807-42a9-8237-cb94e87bbdc0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 10:24:21 crc kubenswrapper[4814]: I0216 10:24:21.529481 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l5v9\" (UniqueName: \"kubernetes.io/projected/24107045-4807-42a9-8237-cb94e87bbdc0-kube-api-access-5l5v9\") on node \"crc\" DevicePath \"\""
Feb 16 10:24:22 crc kubenswrapper[4814]: I0216 10:24:22.326968 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpcvc"
Feb 16 10:24:22 crc kubenswrapper[4814]: I0216 10:24:22.385332 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:22 crc kubenswrapper[4814]: I0216 10:24:22.408501 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpcvc"]
Feb 16 10:24:23 crc kubenswrapper[4814]: I0216 10:24:23.021055 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" path="/var/lib/kubelet/pods/24107045-4807-42a9-8237-cb94e87bbdc0/volumes"
Feb 16 10:24:26 crc kubenswrapper[4814]: I0216 10:24:26.993667 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:24:26 crc kubenswrapper[4814]: E0216 10:24:26.994881 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:24:37 crc kubenswrapper[4814]: I0216 10:24:37.959962 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:24:37 crc kubenswrapper[4814]: I0216 10:24:37.962951 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:24:38 crc kubenswrapper[4814]: I0216 10:24:38.995079 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:24:38 crc kubenswrapper[4814]: E0216 10:24:38.995633 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:24:53 crc kubenswrapper[4814]: I0216 10:24:53.995366 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:24:53 crc kubenswrapper[4814]: E0216 10:24:53.997135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:04 crc kubenswrapper[4814]: I0216 10:25:04.994411 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:25:05 crc kubenswrapper[4814]: I0216 10:25:05.897932 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"}
Feb 16 10:25:07 crc kubenswrapper[4814]: I0216 10:25:07.677211 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:25:07 crc kubenswrapper[4814]: I0216 10:25:07.960827 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:25:07 crc kubenswrapper[4814]: I0216 10:25:07.960969 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:25:08 crc kubenswrapper[4814]: I0216 10:25:08.946778 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" exitCode=0
Feb 16 10:25:08 crc kubenswrapper[4814]: I0216 10:25:08.947055 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"}
Feb 16 10:25:08 crc kubenswrapper[4814]: I0216 10:25:08.947265 4814 scope.go:117] "RemoveContainer" containerID="75a03de08a91b046ec2ebe98699e9779c9cabe87443d61b24524f37232bf8bdd"
Feb 16 10:25:08 crc kubenswrapper[4814]: I0216 10:25:08.948383 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:08 crc kubenswrapper[4814]: E0216 10:25:08.948810 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:09 crc kubenswrapper[4814]: I0216 10:25:09.677515 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:25:09 crc kubenswrapper[4814]: I0216 10:25:09.964374 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:09 crc kubenswrapper[4814]: E0216 10:25:09.964716 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:12 crc kubenswrapper[4814]: I0216 10:25:12.677013 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:25:12 crc kubenswrapper[4814]: I0216 10:25:12.678837 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:12 crc kubenswrapper[4814]: E0216 10:25:12.679200 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:24 crc kubenswrapper[4814]: I0216 10:25:24.994643 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:24 crc kubenswrapper[4814]: E0216 10:25:24.995825 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:36 crc kubenswrapper[4814]: I0216 10:25:36.994853 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:36 crc kubenswrapper[4814]: E0216 10:25:36.996221 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:37 crc kubenswrapper[4814]: I0216 10:25:37.960271 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:25:37 crc kubenswrapper[4814]: I0216 10:25:37.960890 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:25:37 crc kubenswrapper[4814]: I0216 10:25:37.960971 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2"
Feb 16 10:25:37 crc kubenswrapper[4814]: I0216 10:25:37.962193 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 10:25:37 crc kubenswrapper[4814]: I0216 10:25:37.962309 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" gracePeriod=600
Feb 16 10:25:38 crc kubenswrapper[4814]: E0216 10:25:38.102689 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:25:38 crc kubenswrapper[4814]: I0216 10:25:38.349103 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" exitCode=0
Feb 16 10:25:38 crc kubenswrapper[4814]: I0216 10:25:38.349169 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"}
Feb 16 10:25:38 crc kubenswrapper[4814]: I0216 10:25:38.349227 4814 scope.go:117] "RemoveContainer" containerID="84fb8c45ab9fa967bff740456f23fe5a510b837d144dd5c6e8c08439165a2633"
Feb 16 10:25:38 crc kubenswrapper[4814]: I0216 10:25:38.350370 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:25:38 crc kubenswrapper[4814]: E0216 10:25:38.351015 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:25:49 crc kubenswrapper[4814]: I0216 10:25:49.994416 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:25:49 crc kubenswrapper[4814]: I0216 10:25:49.995849 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:25:49 crc kubenswrapper[4814]: E0216 10:25:49.996108 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:25:49 crc kubenswrapper[4814]: E0216 10:25:49.996144 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:26:01 crc kubenswrapper[4814]: I0216 10:26:01.994264 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:26:01 crc kubenswrapper[4814]: I0216 10:26:01.995275 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:26:01 crc kubenswrapper[4814]: E0216 10:26:01.995621 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:26:01 crc kubenswrapper[4814]: E0216 10:26:01.996396 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:26:14 crc kubenswrapper[4814]: I0216 10:26:14.997340 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:26:14 crc kubenswrapper[4814]: E0216 10:26:14.998554 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:26:16 crc kubenswrapper[4814]: I0216 10:26:16.994607 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:26:16 crc kubenswrapper[4814]: E0216 10:26:16.995412 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:26:28 crc kubenswrapper[4814]: I0216 10:26:28.994197 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:26:28 crc kubenswrapper[4814]: I0216 10:26:28.995312 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:26:28 crc kubenswrapper[4814]: E0216 10:26:28.995498 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:26:28 crc kubenswrapper[4814]: E0216 10:26:28.995742 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:26:40 crc kubenswrapper[4814]: I0216 10:26:40.994149 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:26:40 crc kubenswrapper[4814]: E0216 10:26:40.994930 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:26:41 crc kubenswrapper[4814]: I0216 10:26:41.993646 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:26:41 crc kubenswrapper[4814]: E0216 10:26:41.994015 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:26:55 crc kubenswrapper[4814]: I0216 10:26:55.993095 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:26:55 crc kubenswrapper[4814]: E0216 10:26:55.994235 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:26:56 crc kubenswrapper[4814]: I0216 10:26:56.994134 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:26:56 crc kubenswrapper[4814]: E0216 10:26:56.994501 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:27:08 crc kubenswrapper[4814]: I0216 10:27:08.994576 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:27:08 crc kubenswrapper[4814]: E0216 10:27:08.995609 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:27:09 crc kubenswrapper[4814]: I0216 10:27:09.994082 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:27:09 crc kubenswrapper[4814]: E0216 10:27:09.994470 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:27:23 crc kubenswrapper[4814]: I0216 10:27:23.000244 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:27:23 crc kubenswrapper[4814]: E0216 10:27:23.001429 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:27:23 crc kubenswrapper[4814]: I0216 10:27:23.994556 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:27:23 crc kubenswrapper[4814]: E0216 10:27:23.994925 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:27:36 crc kubenswrapper[4814]: I0216 10:27:36.995015 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:27:36 crc kubenswrapper[4814]: I0216 10:27:36.995812 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:27:36 crc kubenswrapper[4814]: E0216 10:27:36.996135 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:27:36 crc kubenswrapper[4814]: E0216 10:27:36.996153 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:27:47 crc kubenswrapper[4814]: I0216 10:27:47.993394 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:27:47 crc kubenswrapper[4814]: E0216 10:27:47.994750 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:27:47 crc kubenswrapper[4814]: I0216 10:27:47.994977 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:27:47 crc kubenswrapper[4814]: E0216 10:27:47.995288 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:27:58 crc kubenswrapper[4814]: I0216 10:27:58.994958 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:27:58 crc kubenswrapper[4814]: E0216 10:27:58.995981 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:28:01 crc kubenswrapper[4814]: I0216 10:28:01.994384 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:28:01 crc kubenswrapper[4814]: E0216 10:28:01.995893 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:28:10 crc kubenswrapper[4814]: I0216 10:28:10.994674 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:28:10 crc kubenswrapper[4814]: E0216 10:28:10.995937 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.290209 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"] Feb 16 10:28:12 crc kubenswrapper[4814]: E0216 10:28:12.291258 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="registry-server" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.291272 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="registry-server" Feb 16 10:28:12 crc kubenswrapper[4814]: E0216 
10:28:12.291306 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="extract-content" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.291314 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="extract-content" Feb 16 10:28:12 crc kubenswrapper[4814]: E0216 10:28:12.291336 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="extract-utilities" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.291343 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="extract-utilities" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.291530 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="24107045-4807-42a9-8237-cb94e87bbdc0" containerName="registry-server" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.293194 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.306238 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"] Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.348994 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.349065 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.349156 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdjxs\" (UniqueName: \"kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.451297 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.451774 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.452368 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.452498 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjxs\" (UniqueName: \"kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.452378 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.484649 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjxs\" (UniqueName: \"kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs\") pod \"certified-operators-v6zt6\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:12 crc kubenswrapper[4814]: I0216 10:28:12.627968 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.024063 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:28:13 crc kubenswrapper[4814]: E0216 10:28:13.024858 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.125788 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"] Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.909439 4814 generic.go:334] "Generic (PLEG): container finished" podID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerID="71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c" exitCode=0 Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.909528 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerDied","Data":"71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c"} Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.910085 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerStarted","Data":"540305c8d17eba5ff98bbed992a75bd47b3501e01bc86144c20fb8e85e80661b"} Feb 16 10:28:13 crc kubenswrapper[4814]: I0216 10:28:13.914219 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:28:15 crc kubenswrapper[4814]: I0216 10:28:15.938877 4814 generic.go:334] 
"Generic (PLEG): container finished" podID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerID="10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804" exitCode=0 Feb 16 10:28:15 crc kubenswrapper[4814]: I0216 10:28:15.939457 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerDied","Data":"10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804"} Feb 16 10:28:16 crc kubenswrapper[4814]: I0216 10:28:16.958042 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerStarted","Data":"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"} Feb 16 10:28:17 crc kubenswrapper[4814]: I0216 10:28:17.007579 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v6zt6" podStartSLOduration=2.542620182 podStartE2EDuration="5.007554413s" podCreationTimestamp="2026-02-16 10:28:12 +0000 UTC" firstStartedPulling="2026-02-16 10:28:13.913873628 +0000 UTC m=+2551.607029808" lastFinishedPulling="2026-02-16 10:28:16.378807849 +0000 UTC m=+2554.071964039" observedRunningTime="2026-02-16 10:28:16.989340807 +0000 UTC m=+2554.682497017" watchObservedRunningTime="2026-02-16 10:28:17.007554413 +0000 UTC m=+2554.700710593" Feb 16 10:28:22 crc kubenswrapper[4814]: I0216 10:28:22.628852 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:22 crc kubenswrapper[4814]: I0216 10:28:22.629423 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:22 crc kubenswrapper[4814]: I0216 10:28:22.684721 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:23 crc kubenswrapper[4814]: I0216 10:28:23.143265 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:23 crc kubenswrapper[4814]: I0216 10:28:23.209995 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"] Feb 16 10:28:24 crc kubenswrapper[4814]: I0216 10:28:24.993948 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502" Feb 16 10:28:24 crc kubenswrapper[4814]: E0216 10:28:24.994976 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:28:24 crc kubenswrapper[4814]: I0216 10:28:24.995852 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:28:24 crc kubenswrapper[4814]: E0216 10:28:24.996175 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.054209 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v6zt6" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="registry-server" 
containerID="cri-o://ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e" gracePeriod=2 Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.629392 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6zt6" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.662221 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities\") pod \"31264d4c-8e69-4f61-9e73-57ed148403a7\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.662299 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content\") pod \"31264d4c-8e69-4f61-9e73-57ed148403a7\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.662588 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdjxs\" (UniqueName: \"kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs\") pod \"31264d4c-8e69-4f61-9e73-57ed148403a7\" (UID: \"31264d4c-8e69-4f61-9e73-57ed148403a7\") " Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.664333 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities" (OuterVolumeSpecName: "utilities") pod "31264d4c-8e69-4f61-9e73-57ed148403a7" (UID: "31264d4c-8e69-4f61-9e73-57ed148403a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.671675 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs" (OuterVolumeSpecName: "kube-api-access-qdjxs") pod "31264d4c-8e69-4f61-9e73-57ed148403a7" (UID: "31264d4c-8e69-4f61-9e73-57ed148403a7"). InnerVolumeSpecName "kube-api-access-qdjxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.726302 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31264d4c-8e69-4f61-9e73-57ed148403a7" (UID: "31264d4c-8e69-4f61-9e73-57ed148403a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.765609 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdjxs\" (UniqueName: \"kubernetes.io/projected/31264d4c-8e69-4f61-9e73-57ed148403a7-kube-api-access-qdjxs\") on node \"crc\" DevicePath \"\"" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.765654 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:28:25 crc kubenswrapper[4814]: I0216 10:28:25.765667 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31264d4c-8e69-4f61-9e73-57ed148403a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.068291 4814 generic.go:334] "Generic (PLEG): container finished" podID="31264d4c-8e69-4f61-9e73-57ed148403a7" 
containerID="ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e" exitCode=0
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.068396 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v6zt6"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.068396 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerDied","Data":"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"}
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.068820 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v6zt6" event={"ID":"31264d4c-8e69-4f61-9e73-57ed148403a7","Type":"ContainerDied","Data":"540305c8d17eba5ff98bbed992a75bd47b3501e01bc86144c20fb8e85e80661b"}
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.068851 4814 scope.go:117] "RemoveContainer" containerID="ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.104835 4814 scope.go:117] "RemoveContainer" containerID="10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.120735 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"]
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.130900 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v6zt6"]
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.133113 4814 scope.go:117] "RemoveContainer" containerID="71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.196005 4814 scope.go:117] "RemoveContainer" containerID="ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"
Feb 16 10:28:26 crc kubenswrapper[4814]: E0216 10:28:26.196726 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e\": container with ID starting with ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e not found: ID does not exist" containerID="ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.196782 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e"} err="failed to get container status \"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e\": rpc error: code = NotFound desc = could not find container \"ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e\": container with ID starting with ea5247d3ea4b53d54ac32ab0d74759d51a5720bf1470e77768cae7f68c92b52e not found: ID does not exist"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.196819 4814 scope.go:117] "RemoveContainer" containerID="10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804"
Feb 16 10:28:26 crc kubenswrapper[4814]: E0216 10:28:26.197433 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804\": container with ID starting with 10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804 not found: ID does not exist" containerID="10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.197503 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804"} err="failed to get container status \"10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804\": rpc error: code = NotFound desc = could not find container \"10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804\": container with ID starting with 10b512bf027b148497c99e076d4d39ce5e3a7907f4891d089ee32f40e3346804 not found: ID does not exist"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.197575 4814 scope.go:117] "RemoveContainer" containerID="71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c"
Feb 16 10:28:26 crc kubenswrapper[4814]: E0216 10:28:26.197986 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c\": container with ID starting with 71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c not found: ID does not exist" containerID="71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c"
Feb 16 10:28:26 crc kubenswrapper[4814]: I0216 10:28:26.198028 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c"} err="failed to get container status \"71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c\": rpc error: code = NotFound desc = could not find container \"71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c\": container with ID starting with 71a8a2a9d3403a4f7f1574ebe77fc0da129357a16de82c2d2b4a221bf9e2525c not found: ID does not exist"
Feb 16 10:28:27 crc kubenswrapper[4814]: I0216 10:28:27.007663 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" path="/var/lib/kubelet/pods/31264d4c-8e69-4f61-9e73-57ed148403a7/volumes"
Feb 16 10:28:38 crc kubenswrapper[4814]: I0216 10:28:38.002768 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:28:38 crc kubenswrapper[4814]: E0216 10:28:38.003917 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:28:38 crc kubenswrapper[4814]: I0216 10:28:38.994032 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:28:38 crc kubenswrapper[4814]: E0216 10:28:38.994310 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:28:50 crc kubenswrapper[4814]: I0216 10:28:50.994495 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:28:50 crc kubenswrapper[4814]: E0216 10:28:50.996080 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:28:51 crc kubenswrapper[4814]: I0216 10:28:51.994363 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:28:51 crc kubenswrapper[4814]: E0216 10:28:51.995151 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:03 crc kubenswrapper[4814]: I0216 10:29:03.994122 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:29:03 crc kubenswrapper[4814]: E0216 10:29:03.995578 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:29:06 crc kubenswrapper[4814]: I0216 10:29:06.994779 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:29:06 crc kubenswrapper[4814]: E0216 10:29:06.995722 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:14 crc kubenswrapper[4814]: I0216 10:29:14.994659 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:29:14 crc kubenswrapper[4814]: E0216 10:29:14.995847 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:29:18 crc kubenswrapper[4814]: I0216 10:29:18.994249 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:29:18 crc kubenswrapper[4814]: E0216 10:29:18.995523 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:29 crc kubenswrapper[4814]: I0216 10:29:29.993386 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:29:29 crc kubenswrapper[4814]: E0216 10:29:29.994328 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:29:31 crc kubenswrapper[4814]: I0216 10:29:31.995216 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:29:31 crc kubenswrapper[4814]: E0216 10:29:31.996158 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:43 crc kubenswrapper[4814]: I0216 10:29:43.993788 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:29:43 crc kubenswrapper[4814]: E0216 10:29:43.994901 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:43 crc kubenswrapper[4814]: I0216 10:29:43.995363 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:29:43 crc kubenswrapper[4814]: E0216 10:29:43.996520 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.940361 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"]
Feb 16 10:29:56 crc kubenswrapper[4814]: E0216 10:29:56.941638 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="registry-server"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.941659 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="registry-server"
Feb 16 10:29:56 crc kubenswrapper[4814]: E0216 10:29:56.941675 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="extract-content"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.941683 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="extract-content"
Feb 16 10:29:56 crc kubenswrapper[4814]: E0216 10:29:56.941735 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="extract-utilities"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.941745 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="extract-utilities"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.942255 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="31264d4c-8e69-4f61-9e73-57ed148403a7" containerName="registry-server"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.944393 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:56 crc kubenswrapper[4814]: I0216 10:29:56.991665 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"]
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.056005 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2gb\" (UniqueName: \"kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.056388 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.056652 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.157570 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.157987 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.158265 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2gb\" (UniqueName: \"kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.158329 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.158639 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.179513 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2gb\" (UniqueName: \"kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb\") pod \"redhat-operators-zrk82\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") " pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.300157 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:29:57 crc kubenswrapper[4814]: I0216 10:29:57.815173 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"]
Feb 16 10:29:58 crc kubenswrapper[4814]: I0216 10:29:58.226375 4814 generic.go:334] "Generic (PLEG): container finished" podID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerID="970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d" exitCode=0
Feb 16 10:29:58 crc kubenswrapper[4814]: I0216 10:29:58.226903 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerDied","Data":"970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d"}
Feb 16 10:29:58 crc kubenswrapper[4814]: I0216 10:29:58.226936 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerStarted","Data":"1bf23738702ba2cd45bff215caca724a41dc8f73e28b86bf280170cbb3e8ab32"}
Feb 16 10:29:58 crc kubenswrapper[4814]: I0216 10:29:58.994750 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:29:58 crc kubenswrapper[4814]: I0216 10:29:58.995336 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:29:58 crc kubenswrapper[4814]: E0216 10:29:58.995760 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:29:58 crc kubenswrapper[4814]: E0216 10:29:58.995857 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:29:59 crc kubenswrapper[4814]: I0216 10:29:59.243499 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerStarted","Data":"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab"}
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.158112 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"]
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.160228 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.165692 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.167484 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.177231 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"]
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.234158 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.234328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.234411 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz8w2\" (UniqueName: \"kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.252953 4814 generic.go:334] "Generic (PLEG): container finished" podID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerID="36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab" exitCode=0
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.253011 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerDied","Data":"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab"}
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.337897 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.338186 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz8w2\" (UniqueName: \"kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.338386 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.339827 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.346457 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.360065 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz8w2\" (UniqueName: \"kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2\") pod \"collect-profiles-29520630-tqnxv\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:00 crc kubenswrapper[4814]: I0216 10:30:00.482764 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:01 crc kubenswrapper[4814]: I0216 10:30:01.011188 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"]
Feb 16 10:30:01 crc kubenswrapper[4814]: W0216 10:30:01.014527 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda1e3427_8e98_4cc8_ad68_5af087a8443f.slice/crio-19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08 WatchSource:0}: Error finding container 19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08: Status 404 returned error can't find the container with id 19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08
Feb 16 10:30:01 crc kubenswrapper[4814]: I0216 10:30:01.269418 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerStarted","Data":"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae"}
Feb 16 10:30:01 crc kubenswrapper[4814]: I0216 10:30:01.271291 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv" event={"ID":"da1e3427-8e98-4cc8-ad68-5af087a8443f","Type":"ContainerStarted","Data":"b3254146dcb2df27af387e8fe0b16e2aabaca25fdcb8490361a756f18a6f83c9"}
Feb 16 10:30:01 crc kubenswrapper[4814]: I0216 10:30:01.271338 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv" event={"ID":"da1e3427-8e98-4cc8-ad68-5af087a8443f","Type":"ContainerStarted","Data":"19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08"}
Feb 16 10:30:01 crc kubenswrapper[4814]: I0216 10:30:01.341064 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrk82" podStartSLOduration=2.914810287 podStartE2EDuration="5.341032186s" podCreationTimestamp="2026-02-16 10:29:56 +0000 UTC" firstStartedPulling="2026-02-16 10:29:58.230223775 +0000 UTC m=+2655.923379975" lastFinishedPulling="2026-02-16 10:30:00.656445684 +0000 UTC m=+2658.349601874" observedRunningTime="2026-02-16 10:30:01.293696918 +0000 UTC m=+2658.986853098" watchObservedRunningTime="2026-02-16 10:30:01.341032186 +0000 UTC m=+2659.034188366"
Feb 16 10:30:02 crc kubenswrapper[4814]: I0216 10:30:02.306238 4814 generic.go:334] "Generic (PLEG): container finished" podID="da1e3427-8e98-4cc8-ad68-5af087a8443f" containerID="b3254146dcb2df27af387e8fe0b16e2aabaca25fdcb8490361a756f18a6f83c9" exitCode=0
Feb 16 10:30:02 crc kubenswrapper[4814]: I0216 10:30:02.306811 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv" event={"ID":"da1e3427-8e98-4cc8-ad68-5af087a8443f","Type":"ContainerDied","Data":"b3254146dcb2df27af387e8fe0b16e2aabaca25fdcb8490361a756f18a6f83c9"}
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.675916 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.828938 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz8w2\" (UniqueName: \"kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2\") pod \"da1e3427-8e98-4cc8-ad68-5af087a8443f\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") "
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.829107 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume\") pod \"da1e3427-8e98-4cc8-ad68-5af087a8443f\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") "
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.829131 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume\") pod \"da1e3427-8e98-4cc8-ad68-5af087a8443f\" (UID: \"da1e3427-8e98-4cc8-ad68-5af087a8443f\") "
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.830373 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume" (OuterVolumeSpecName: "config-volume") pod "da1e3427-8e98-4cc8-ad68-5af087a8443f" (UID: "da1e3427-8e98-4cc8-ad68-5af087a8443f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.838146 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da1e3427-8e98-4cc8-ad68-5af087a8443f" (UID: "da1e3427-8e98-4cc8-ad68-5af087a8443f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.838157 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2" (OuterVolumeSpecName: "kube-api-access-pz8w2") pod "da1e3427-8e98-4cc8-ad68-5af087a8443f" (UID: "da1e3427-8e98-4cc8-ad68-5af087a8443f"). InnerVolumeSpecName "kube-api-access-pz8w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.931722 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz8w2\" (UniqueName: \"kubernetes.io/projected/da1e3427-8e98-4cc8-ad68-5af087a8443f-kube-api-access-pz8w2\") on node \"crc\" DevicePath \"\""
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.931801 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da1e3427-8e98-4cc8-ad68-5af087a8443f-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 10:30:03 crc kubenswrapper[4814]: I0216 10:30:03.931817 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da1e3427-8e98-4cc8-ad68-5af087a8443f-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 10:30:04 crc kubenswrapper[4814]: I0216 10:30:04.327795 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"
Feb 16 10:30:04 crc kubenswrapper[4814]: I0216 10:30:04.327763 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv" event={"ID":"da1e3427-8e98-4cc8-ad68-5af087a8443f","Type":"ContainerDied","Data":"19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08"}
Feb 16 10:30:04 crc kubenswrapper[4814]: I0216 10:30:04.328279 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b7aec1752e9f8f373f852935e60ee987c81369e475753d68eb71283e596e08"
Feb 16 10:30:04 crc kubenswrapper[4814]: I0216 10:30:04.781138 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"]
Feb 16 10:30:04 crc kubenswrapper[4814]: I0216 10:30:04.796578 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520585-fl75z"]
Feb 16 10:30:05 crc kubenswrapper[4814]: I0216 10:30:05.013518 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae79d44f-eef6-42b4-bd2b-50b9faece115" path="/var/lib/kubelet/pods/ae79d44f-eef6-42b4-bd2b-50b9faece115/volumes"
Feb 16 10:30:07 crc kubenswrapper[4814]: I0216 10:30:07.301462 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:30:07 crc kubenswrapper[4814]: I0216 10:30:07.301957 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:30:08 crc kubenswrapper[4814]: I0216 10:30:08.381945 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zrk82" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="registry-server" probeResult="failure" output=<
Feb 16 10:30:08 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 10:30:08 crc kubenswrapper[4814]: >
Feb 16 10:30:13 crc kubenswrapper[4814]: I0216 10:30:13.013162 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:30:13 crc kubenswrapper[4814]: I0216 10:30:13.015036 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5"
Feb 16 10:30:13 crc kubenswrapper[4814]: E0216 10:30:13.015672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:30:14 crc kubenswrapper[4814]: I0216 10:30:14.445516 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e"}
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.379523 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.454504 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.487878 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" exitCode=0
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.487953 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e"}
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.488027 4814 scope.go:117] "RemoveContainer" containerID="20f3634861e6f92272f5647db98ea8babc4bd75c11ac1fcd9f3cb88ba27c0502"
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.489310 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e"
Feb 16 10:30:17 crc kubenswrapper[4814]: E0216 10:30:17.489857 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.630930 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"]
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.676987 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:30:17 crc kubenswrapper[4814]: I0216 10:30:17.677055 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:30:18 crc kubenswrapper[4814]: I0216 10:30:18.519571 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrk82" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="registry-server" containerID="cri-o://046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae" gracePeriod=2
Feb 16 10:30:18 crc kubenswrapper[4814]: I0216 10:30:18.520617 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e"
Feb 16 10:30:18 crc kubenswrapper[4814]: E0216 10:30:18.521117 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:30:18 crc kubenswrapper[4814]: I0216 10:30:18.709156 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.015308 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrk82"
Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.072268 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content\") pod \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") "
Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.072759 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities\") pod \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") "
Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.072901 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn2gb\" (UniqueName: \"kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb\") pod \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\" (UID: \"e682d242-965e-468f-b5e8-cb1bf76cfbcd\") "
Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.074967
4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities" (OuterVolumeSpecName: "utilities") pod "e682d242-965e-468f-b5e8-cb1bf76cfbcd" (UID: "e682d242-965e-468f-b5e8-cb1bf76cfbcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.080962 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb" (OuterVolumeSpecName: "kube-api-access-zn2gb") pod "e682d242-965e-468f-b5e8-cb1bf76cfbcd" (UID: "e682d242-965e-468f-b5e8-cb1bf76cfbcd"). InnerVolumeSpecName "kube-api-access-zn2gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.175390 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.175427 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn2gb\" (UniqueName: \"kubernetes.io/projected/e682d242-965e-468f-b5e8-cb1bf76cfbcd-kube-api-access-zn2gb\") on node \"crc\" DevicePath \"\"" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.199364 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e682d242-965e-468f-b5e8-cb1bf76cfbcd" (UID: "e682d242-965e-468f-b5e8-cb1bf76cfbcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.278025 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e682d242-965e-468f-b5e8-cb1bf76cfbcd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.536967 4814 generic.go:334] "Generic (PLEG): container finished" podID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerID="046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae" exitCode=0 Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.537069 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerDied","Data":"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae"} Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.537195 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrk82" event={"ID":"e682d242-965e-468f-b5e8-cb1bf76cfbcd","Type":"ContainerDied","Data":"1bf23738702ba2cd45bff215caca724a41dc8f73e28b86bf280170cbb3e8ab32"} Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.537236 4814 scope.go:117] "RemoveContainer" containerID="046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.537649 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrk82" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.538199 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:30:19 crc kubenswrapper[4814]: E0216 10:30:19.538772 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.580171 4814 scope.go:117] "RemoveContainer" containerID="36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.587347 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"] Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.598884 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrk82"] Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.615054 4814 scope.go:117] "RemoveContainer" containerID="970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.682513 4814 scope.go:117] "RemoveContainer" containerID="046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae" Feb 16 10:30:19 crc kubenswrapper[4814]: E0216 10:30:19.683053 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae\": container with ID starting with 046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae not found: ID does not exist" 
containerID="046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.683104 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae"} err="failed to get container status \"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae\": rpc error: code = NotFound desc = could not find container \"046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae\": container with ID starting with 046facd8867d5607817de83b350ace37ef3b060a8262ee6efc959a11f84af8ae not found: ID does not exist" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.683139 4814 scope.go:117] "RemoveContainer" containerID="36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab" Feb 16 10:30:19 crc kubenswrapper[4814]: E0216 10:30:19.683772 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab\": container with ID starting with 36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab not found: ID does not exist" containerID="36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.683823 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab"} err="failed to get container status \"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab\": rpc error: code = NotFound desc = could not find container \"36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab\": container with ID starting with 36b9e1b1d38a3c9ad48e41d36c931daaeaa5ca14dd45da84c421319918c0faab not found: ID does not exist" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.683861 4814 scope.go:117] 
"RemoveContainer" containerID="970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d" Feb 16 10:30:19 crc kubenswrapper[4814]: E0216 10:30:19.684245 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d\": container with ID starting with 970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d not found: ID does not exist" containerID="970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d" Feb 16 10:30:19 crc kubenswrapper[4814]: I0216 10:30:19.684285 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d"} err="failed to get container status \"970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d\": rpc error: code = NotFound desc = could not find container \"970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d\": container with ID starting with 970792d125d1c432e39ea6ea99d773a2cb11b5dd6b429bee21673cb47741190d not found: ID does not exist" Feb 16 10:30:21 crc kubenswrapper[4814]: I0216 10:30:21.008847 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" path="/var/lib/kubelet/pods/e682d242-965e-468f-b5e8-cb1bf76cfbcd/volumes" Feb 16 10:30:27 crc kubenswrapper[4814]: I0216 10:30:27.994206 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:30:27 crc kubenswrapper[4814]: E0216 10:30:27.995050 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:30:30 crc kubenswrapper[4814]: I0216 10:30:30.994111 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:30:30 crc kubenswrapper[4814]: E0216 10:30:30.995047 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:30:40 crc kubenswrapper[4814]: I0216 10:30:40.014792 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:30:40 crc kubenswrapper[4814]: I0216 10:30:40.822727 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed"} Feb 16 10:30:44 crc kubenswrapper[4814]: I0216 10:30:44.995288 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:30:44 crc kubenswrapper[4814]: E0216 10:30:44.996682 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:30:52 crc kubenswrapper[4814]: I0216 10:30:52.191993 4814 scope.go:117] "RemoveContainer" 
containerID="ad6034050e134cc4faffa4c5cde1d6dd8ea79a3b8d5f1be70c99e9989ad7b634" Feb 16 10:30:52 crc kubenswrapper[4814]: I0216 10:30:52.252682 4814 scope.go:117] "RemoveContainer" containerID="c5853d9c67709fd4bcf12a47aea42d5072e84c6d4deec317dfb9bb5257b4a162" Feb 16 10:30:52 crc kubenswrapper[4814]: I0216 10:30:52.344668 4814 scope.go:117] "RemoveContainer" containerID="b70cc66088d3481eb75731a179396288a61c45d4a5ab165557663357494c1949" Feb 16 10:30:52 crc kubenswrapper[4814]: I0216 10:30:52.388614 4814 scope.go:117] "RemoveContainer" containerID="f539bb60153e4cc0aa4ab601ce544b3c74b9c189f99189625c463bb7155486a7" Feb 16 10:30:55 crc kubenswrapper[4814]: I0216 10:30:55.994609 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:30:55 crc kubenswrapper[4814]: E0216 10:30:55.996072 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:31:07 crc kubenswrapper[4814]: I0216 10:31:07.993605 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:31:07 crc kubenswrapper[4814]: E0216 10:31:07.994896 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:31:19 crc kubenswrapper[4814]: I0216 10:31:19.994199 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 
10:31:19 crc kubenswrapper[4814]: E0216 10:31:19.995240 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:31:30 crc kubenswrapper[4814]: I0216 10:31:30.996452 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:31:30 crc kubenswrapper[4814]: E0216 10:31:30.997883 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:31:44 crc kubenswrapper[4814]: I0216 10:31:44.994209 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:31:44 crc kubenswrapper[4814]: E0216 10:31:44.995678 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:31:56 crc kubenswrapper[4814]: I0216 10:31:56.994912 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:31:56 crc kubenswrapper[4814]: E0216 10:31:56.998037 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:32:11 crc kubenswrapper[4814]: I0216 10:32:11.994416 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:32:11 crc kubenswrapper[4814]: E0216 10:32:11.995487 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:32:25 crc kubenswrapper[4814]: I0216 10:32:25.994314 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:32:25 crc kubenswrapper[4814]: E0216 10:32:25.996015 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:32:39 crc kubenswrapper[4814]: I0216 10:32:39.995253 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:32:39 crc kubenswrapper[4814]: E0216 10:32:39.996985 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 
10:32:54 crc kubenswrapper[4814]: I0216 10:32:54.995520 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:32:54 crc kubenswrapper[4814]: E0216 10:32:54.996793 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:33:06 crc kubenswrapper[4814]: I0216 10:33:06.994160 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:33:06 crc kubenswrapper[4814]: E0216 10:33:06.995467 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:33:07 crc kubenswrapper[4814]: I0216 10:33:07.960290 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:33:07 crc kubenswrapper[4814]: I0216 10:33:07.960888 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:33:21 crc kubenswrapper[4814]: I0216 10:33:21.994481 4814 
scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:33:21 crc kubenswrapper[4814]: E0216 10:33:21.995615 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.512019 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:31 crc kubenswrapper[4814]: E0216 10:33:31.513306 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1e3427-8e98-4cc8-ad68-5af087a8443f" containerName="collect-profiles" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513324 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1e3427-8e98-4cc8-ad68-5af087a8443f" containerName="collect-profiles" Feb 16 10:33:31 crc kubenswrapper[4814]: E0216 10:33:31.513352 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="extract-content" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513365 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="extract-content" Feb 16 10:33:31 crc kubenswrapper[4814]: E0216 10:33:31.513385 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="extract-utilities" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513395 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="extract-utilities" Feb 16 10:33:31 crc kubenswrapper[4814]: E0216 10:33:31.513412 4814 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="registry-server" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513420 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="registry-server" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513694 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e682d242-965e-468f-b5e8-cb1bf76cfbcd" containerName="registry-server" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.513743 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1e3427-8e98-4cc8-ad68-5af087a8443f" containerName="collect-profiles" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.515567 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.524808 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6l6\" (UniqueName: \"kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.524872 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.525116 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.552160 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.627689 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.627784 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x6l6\" (UniqueName: \"kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.627820 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.628444 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc 
kubenswrapper[4814]: I0216 10:33:31.628456 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.659372 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6l6\" (UniqueName: \"kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6\") pod \"community-operators-wmqqw\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:31 crc kubenswrapper[4814]: I0216 10:33:31.874396 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:32 crc kubenswrapper[4814]: I0216 10:33:32.473855 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:32 crc kubenswrapper[4814]: I0216 10:33:32.873783 4814 generic.go:334] "Generic (PLEG): container finished" podID="ccddb69e-f060-44cb-9144-385292a74fbb" containerID="6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c" exitCode=0 Feb 16 10:33:32 crc kubenswrapper[4814]: I0216 10:33:32.873877 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerDied","Data":"6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c"} Feb 16 10:33:32 crc kubenswrapper[4814]: I0216 10:33:32.874382 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" 
event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerStarted","Data":"314cc2459abd5b16695c78caac128b392ce04011a470123048f8a986d7e3ed40"} Feb 16 10:33:32 crc kubenswrapper[4814]: I0216 10:33:32.877096 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:33:33 crc kubenswrapper[4814]: I0216 10:33:33.003852 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:33:33 crc kubenswrapper[4814]: E0216 10:33:33.004335 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:33:33 crc kubenswrapper[4814]: I0216 10:33:33.889328 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerStarted","Data":"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847"} Feb 16 10:33:34 crc kubenswrapper[4814]: I0216 10:33:34.904300 4814 generic.go:334] "Generic (PLEG): container finished" podID="ccddb69e-f060-44cb-9144-385292a74fbb" containerID="2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847" exitCode=0 Feb 16 10:33:34 crc kubenswrapper[4814]: I0216 10:33:34.904359 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerDied","Data":"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847"} Feb 16 10:33:35 crc kubenswrapper[4814]: I0216 10:33:35.920528 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" 
event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerStarted","Data":"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce"} Feb 16 10:33:35 crc kubenswrapper[4814]: I0216 10:33:35.956729 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wmqqw" podStartSLOduration=2.512487316 podStartE2EDuration="4.956704481s" podCreationTimestamp="2026-02-16 10:33:31 +0000 UTC" firstStartedPulling="2026-02-16 10:33:32.87673673 +0000 UTC m=+2870.569892910" lastFinishedPulling="2026-02-16 10:33:35.320953885 +0000 UTC m=+2873.014110075" observedRunningTime="2026-02-16 10:33:35.947190313 +0000 UTC m=+2873.640346503" watchObservedRunningTime="2026-02-16 10:33:35.956704481 +0000 UTC m=+2873.649860671" Feb 16 10:33:37 crc kubenswrapper[4814]: I0216 10:33:37.959840 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:33:37 crc kubenswrapper[4814]: I0216 10:33:37.960425 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:33:41 crc kubenswrapper[4814]: I0216 10:33:41.874760 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:41 crc kubenswrapper[4814]: I0216 10:33:41.875738 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:41 crc kubenswrapper[4814]: I0216 10:33:41.940603 4814 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:42 crc kubenswrapper[4814]: I0216 10:33:42.060359 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:42 crc kubenswrapper[4814]: I0216 10:33:42.238940 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.040985 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wmqqw" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="registry-server" containerID="cri-o://29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce" gracePeriod=2 Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.524383 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.685293 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6l6\" (UniqueName: \"kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6\") pod \"ccddb69e-f060-44cb-9144-385292a74fbb\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.685352 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content\") pod \"ccddb69e-f060-44cb-9144-385292a74fbb\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.685658 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities\") pod \"ccddb69e-f060-44cb-9144-385292a74fbb\" (UID: \"ccddb69e-f060-44cb-9144-385292a74fbb\") " Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.686793 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities" (OuterVolumeSpecName: "utilities") pod "ccddb69e-f060-44cb-9144-385292a74fbb" (UID: "ccddb69e-f060-44cb-9144-385292a74fbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.693628 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6" (OuterVolumeSpecName: "kube-api-access-2x6l6") pod "ccddb69e-f060-44cb-9144-385292a74fbb" (UID: "ccddb69e-f060-44cb-9144-385292a74fbb"). InnerVolumeSpecName "kube-api-access-2x6l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.756498 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccddb69e-f060-44cb-9144-385292a74fbb" (UID: "ccddb69e-f060-44cb-9144-385292a74fbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.788573 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6l6\" (UniqueName: \"kubernetes.io/projected/ccddb69e-f060-44cb-9144-385292a74fbb-kube-api-access-2x6l6\") on node \"crc\" DevicePath \"\"" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.788615 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:33:44 crc kubenswrapper[4814]: I0216 10:33:44.788626 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccddb69e-f060-44cb-9144-385292a74fbb-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.058145 4814 generic.go:334] "Generic (PLEG): container finished" podID="ccddb69e-f060-44cb-9144-385292a74fbb" containerID="29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce" exitCode=0 Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.058218 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerDied","Data":"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce"} Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.058268 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wmqqw" event={"ID":"ccddb69e-f060-44cb-9144-385292a74fbb","Type":"ContainerDied","Data":"314cc2459abd5b16695c78caac128b392ce04011a470123048f8a986d7e3ed40"} Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.058289 4814 scope.go:117] "RemoveContainer" containerID="29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 
10:33:45.058488 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wmqqw" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.091120 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.093279 4814 scope.go:117] "RemoveContainer" containerID="2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.100657 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wmqqw"] Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.126128 4814 scope.go:117] "RemoveContainer" containerID="6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.172395 4814 scope.go:117] "RemoveContainer" containerID="29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce" Feb 16 10:33:45 crc kubenswrapper[4814]: E0216 10:33:45.174228 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce\": container with ID starting with 29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce not found: ID does not exist" containerID="29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.174287 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce"} err="failed to get container status \"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce\": rpc error: code = NotFound desc = could not find container \"29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce\": container with ID starting with 
29240cb43abbcb340d2efacb029351f094e20c6c10637df3c454b37c1eb6f7ce not found: ID does not exist" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.174347 4814 scope.go:117] "RemoveContainer" containerID="2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847" Feb 16 10:33:45 crc kubenswrapper[4814]: E0216 10:33:45.175016 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847\": container with ID starting with 2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847 not found: ID does not exist" containerID="2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.175087 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847"} err="failed to get container status \"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847\": rpc error: code = NotFound desc = could not find container \"2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847\": container with ID starting with 2c016bd3595d6e68974fdd18f93098fe47081c9e365cb5cb8a5f60006bc5a847 not found: ID does not exist" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.175137 4814 scope.go:117] "RemoveContainer" containerID="6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c" Feb 16 10:33:45 crc kubenswrapper[4814]: E0216 10:33:45.175527 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c\": container with ID starting with 6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c not found: ID does not exist" containerID="6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c" Feb 16 10:33:45 crc 
kubenswrapper[4814]: I0216 10:33:45.175565 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c"} err="failed to get container status \"6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c\": rpc error: code = NotFound desc = could not find container \"6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c\": container with ID starting with 6512ebd9a007144629dfd1dd1741f3e4005bcb28ecdf54ac5aa26874ad079a1c not found: ID does not exist" Feb 16 10:33:45 crc kubenswrapper[4814]: I0216 10:33:45.994823 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:33:45 crc kubenswrapper[4814]: E0216 10:33:45.995268 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:33:47 crc kubenswrapper[4814]: I0216 10:33:47.014859 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" path="/var/lib/kubelet/pods/ccddb69e-f060-44cb-9144-385292a74fbb/volumes" Feb 16 10:34:00 crc kubenswrapper[4814]: I0216 10:34:00.995413 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:34:00 crc kubenswrapper[4814]: E0216 10:34:00.997056 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" 
podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:34:07 crc kubenswrapper[4814]: I0216 10:34:07.960662 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:34:07 crc kubenswrapper[4814]: I0216 10:34:07.961692 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:34:07 crc kubenswrapper[4814]: I0216 10:34:07.961778 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:34:07 crc kubenswrapper[4814]: I0216 10:34:07.963176 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:34:07 crc kubenswrapper[4814]: I0216 10:34:07.963290 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed" gracePeriod=600 Feb 16 10:34:08 crc kubenswrapper[4814]: I0216 10:34:08.350750 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" 
containerID="f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed" exitCode=0 Feb 16 10:34:08 crc kubenswrapper[4814]: I0216 10:34:08.350830 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed"} Feb 16 10:34:08 crc kubenswrapper[4814]: I0216 10:34:08.351599 4814 scope.go:117] "RemoveContainer" containerID="c86a89be5202a99b071ae43755d8a64549d04eb59001a8cd93f9cd33c015fba5" Feb 16 10:34:09 crc kubenswrapper[4814]: I0216 10:34:09.364632 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"} Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.403654 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:11 crc kubenswrapper[4814]: E0216 10:34:11.405286 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="extract-content" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.405318 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="extract-content" Feb 16 10:34:11 crc kubenswrapper[4814]: E0216 10:34:11.405379 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="registry-server" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.405392 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="registry-server" Feb 16 10:34:11 crc kubenswrapper[4814]: E0216 10:34:11.405467 4814 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="extract-utilities" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.405482 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="extract-utilities" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.405848 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccddb69e-f060-44cb-9144-385292a74fbb" containerName="registry-server" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.419125 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.435485 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.525249 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8hj\" (UniqueName: \"kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.525353 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.525625 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content\") pod 
\"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.628649 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.628755 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.628906 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk8hj\" (UniqueName: \"kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.629367 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.629459 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content\") pod \"redhat-marketplace-pfmzc\" (UID: 
\"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.654329 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk8hj\" (UniqueName: \"kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj\") pod \"redhat-marketplace-pfmzc\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:11 crc kubenswrapper[4814]: I0216 10:34:11.758666 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:12 crc kubenswrapper[4814]: I0216 10:34:12.301521 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:12 crc kubenswrapper[4814]: I0216 10:34:12.415512 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerStarted","Data":"282dac807bc81434f0820874377551c4a562cff9dbab6faefcf57d4e0b88de96"} Feb 16 10:34:13 crc kubenswrapper[4814]: I0216 10:34:13.430073 4814 generic.go:334] "Generic (PLEG): container finished" podID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerID="804b99a06e501d705d498906cd83c62110459231da1790a8f6a5f1231de56080" exitCode=0 Feb 16 10:34:13 crc kubenswrapper[4814]: I0216 10:34:13.430178 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerDied","Data":"804b99a06e501d705d498906cd83c62110459231da1790a8f6a5f1231de56080"} Feb 16 10:34:13 crc kubenswrapper[4814]: I0216 10:34:13.994147 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:34:13 crc kubenswrapper[4814]: E0216 
10:34:13.994968 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:34:14 crc kubenswrapper[4814]: I0216 10:34:14.445269 4814 generic.go:334] "Generic (PLEG): container finished" podID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerID="f94b1e1f045d7f15ba7bc31597a4c26e36da8457c5ae1dc8e12e1caca508b25c" exitCode=0 Feb 16 10:34:14 crc kubenswrapper[4814]: I0216 10:34:14.445329 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerDied","Data":"f94b1e1f045d7f15ba7bc31597a4c26e36da8457c5ae1dc8e12e1caca508b25c"} Feb 16 10:34:15 crc kubenswrapper[4814]: I0216 10:34:15.464656 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerStarted","Data":"8d2bc9d0bfe647632e8661d36a543c2f3db6c0a57ac4df93bd209552e43e477f"} Feb 16 10:34:15 crc kubenswrapper[4814]: I0216 10:34:15.490251 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfmzc" podStartSLOduration=3.074132171 podStartE2EDuration="4.490228732s" podCreationTimestamp="2026-02-16 10:34:11 +0000 UTC" firstStartedPulling="2026-02-16 10:34:13.433807631 +0000 UTC m=+2911.126963831" lastFinishedPulling="2026-02-16 10:34:14.849904212 +0000 UTC m=+2912.543060392" observedRunningTime="2026-02-16 10:34:15.482996976 +0000 UTC m=+2913.176153176" watchObservedRunningTime="2026-02-16 10:34:15.490228732 +0000 UTC m=+2913.183384912" Feb 16 10:34:21 crc kubenswrapper[4814]: I0216 10:34:21.760242 4814 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:21 crc kubenswrapper[4814]: I0216 10:34:21.761390 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:21 crc kubenswrapper[4814]: I0216 10:34:21.842891 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:22 crc kubenswrapper[4814]: I0216 10:34:22.627148 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.114818 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.116326 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfmzc" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="registry-server" containerID="cri-o://8d2bc9d0bfe647632e8661d36a543c2f3db6c0a57ac4df93bd209552e43e477f" gracePeriod=2 Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.605231 4814 generic.go:334] "Generic (PLEG): container finished" podID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerID="8d2bc9d0bfe647632e8661d36a543c2f3db6c0a57ac4df93bd209552e43e477f" exitCode=0 Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.605623 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerDied","Data":"8d2bc9d0bfe647632e8661d36a543c2f3db6c0a57ac4df93bd209552e43e477f"} Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.605650 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfmzc" 
event={"ID":"3cc35456-00f7-41c0-80b1-d7613fb5c66b","Type":"ContainerDied","Data":"282dac807bc81434f0820874377551c4a562cff9dbab6faefcf57d4e0b88de96"} Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.605660 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="282dac807bc81434f0820874377551c4a562cff9dbab6faefcf57d4e0b88de96" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.676836 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.746834 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content\") pod \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.747122 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk8hj\" (UniqueName: \"kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj\") pod \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.747280 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities\") pod \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\" (UID: \"3cc35456-00f7-41c0-80b1-d7613fb5c66b\") " Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.748623 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities" (OuterVolumeSpecName: "utilities") pod "3cc35456-00f7-41c0-80b1-d7613fb5c66b" (UID: "3cc35456-00f7-41c0-80b1-d7613fb5c66b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.758048 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj" (OuterVolumeSpecName: "kube-api-access-pk8hj") pod "3cc35456-00f7-41c0-80b1-d7613fb5c66b" (UID: "3cc35456-00f7-41c0-80b1-d7613fb5c66b"). InnerVolumeSpecName "kube-api-access-pk8hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.772104 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cc35456-00f7-41c0-80b1-d7613fb5c66b" (UID: "3cc35456-00f7-41c0-80b1-d7613fb5c66b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.849962 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.850333 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc35456-00f7-41c0-80b1-d7613fb5c66b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:34:26 crc kubenswrapper[4814]: I0216 10:34:26.850351 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk8hj\" (UniqueName: \"kubernetes.io/projected/3cc35456-00f7-41c0-80b1-d7613fb5c66b-kube-api-access-pk8hj\") on node \"crc\" DevicePath \"\"" Feb 16 10:34:27 crc kubenswrapper[4814]: I0216 10:34:27.615429 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfmzc" Feb 16 10:34:27 crc kubenswrapper[4814]: I0216 10:34:27.650710 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:27 crc kubenswrapper[4814]: I0216 10:34:27.662324 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfmzc"] Feb 16 10:34:28 crc kubenswrapper[4814]: I0216 10:34:28.994397 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:34:28 crc kubenswrapper[4814]: E0216 10:34:28.995051 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:34:29 crc kubenswrapper[4814]: I0216 10:34:29.008967 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" path="/var/lib/kubelet/pods/3cc35456-00f7-41c0-80b1-d7613fb5c66b/volumes" Feb 16 10:34:40 crc kubenswrapper[4814]: I0216 10:34:40.995515 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:34:40 crc kubenswrapper[4814]: E0216 10:34:40.997085 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:34:54 crc kubenswrapper[4814]: I0216 10:34:54.994474 4814 scope.go:117] "RemoveContainer" 
containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:34:54 crc kubenswrapper[4814]: E0216 10:34:54.996218 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:35:09 crc kubenswrapper[4814]: I0216 10:35:09.995169 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:35:09 crc kubenswrapper[4814]: E0216 10:35:09.996872 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:35:23 crc kubenswrapper[4814]: I0216 10:35:23.002171 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:35:24 crc kubenswrapper[4814]: I0216 10:35:24.288387 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"} Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.334686 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" exitCode=0 Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.334787 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"} Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.335649 4814 scope.go:117] "RemoveContainer" containerID="540d718fb4a7fb6d8caf9e68b8da5db8c09bd76ab2d9eb810ca9211560ff783e" Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.336638 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:35:27 crc kubenswrapper[4814]: E0216 10:35:27.337258 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.677504 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.677606 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:35:27 crc kubenswrapper[4814]: I0216 10:35:27.677620 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:35:28 crc kubenswrapper[4814]: I0216 10:35:28.351778 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:35:28 crc kubenswrapper[4814]: E0216 10:35:28.352147 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" 
podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:35:39 crc kubenswrapper[4814]: I0216 10:35:39.994639 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:35:39 crc kubenswrapper[4814]: E0216 10:35:39.996133 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:35:53 crc kubenswrapper[4814]: I0216 10:35:53.007088 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:35:53 crc kubenswrapper[4814]: E0216 10:35:53.008742 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:36:07 crc kubenswrapper[4814]: I0216 10:36:07.995113 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:36:07 crc kubenswrapper[4814]: E0216 10:36:07.997431 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:36:23 crc kubenswrapper[4814]: I0216 10:36:23.000026 4814 scope.go:117] "RemoveContainer" 
containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:36:23 crc kubenswrapper[4814]: E0216 10:36:23.005054 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:36:35 crc kubenswrapper[4814]: I0216 10:36:35.994608 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:36:35 crc kubenswrapper[4814]: E0216 10:36:35.996437 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:36:37 crc kubenswrapper[4814]: I0216 10:36:37.960802 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:36:37 crc kubenswrapper[4814]: I0216 10:36:37.961353 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:36:50 crc kubenswrapper[4814]: I0216 10:36:50.993939 4814 scope.go:117] "RemoveContainer" 
containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:36:50 crc kubenswrapper[4814]: E0216 10:36:50.996796 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:37:04 crc kubenswrapper[4814]: I0216 10:37:03.994229 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:37:04 crc kubenswrapper[4814]: E0216 10:37:04.010278 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:37:07 crc kubenswrapper[4814]: I0216 10:37:07.960134 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:37:07 crc kubenswrapper[4814]: I0216 10:37:07.961018 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:37:14 crc kubenswrapper[4814]: I0216 10:37:14.995064 4814 scope.go:117] "RemoveContainer" 
containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:37:14 crc kubenswrapper[4814]: E0216 10:37:14.996096 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:37:25 crc kubenswrapper[4814]: I0216 10:37:25.995298 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:37:25 crc kubenswrapper[4814]: E0216 10:37:25.996732 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:37:37 crc kubenswrapper[4814]: I0216 10:37:37.959792 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:37:37 crc kubenswrapper[4814]: I0216 10:37:37.960362 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:37:37 crc kubenswrapper[4814]: I0216 10:37:37.960430 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:37:37 crc kubenswrapper[4814]: I0216 10:37:37.961449 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:37:37 crc kubenswrapper[4814]: I0216 10:37:37.961516 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" gracePeriod=600 Feb 16 10:37:38 crc kubenswrapper[4814]: E0216 10:37:38.089352 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:37:38 crc kubenswrapper[4814]: I0216 10:37:38.914285 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" exitCode=0 Feb 16 10:37:38 crc kubenswrapper[4814]: I0216 10:37:38.914586 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"} Feb 16 10:37:38 crc 
kubenswrapper[4814]: I0216 10:37:38.914952 4814 scope.go:117] "RemoveContainer" containerID="f241583f7d62974ed97a866ff71898f5c2c744e4436f3f46b28bbc5b211a36ed" Feb 16 10:37:38 crc kubenswrapper[4814]: I0216 10:37:38.916498 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:37:38 crc kubenswrapper[4814]: E0216 10:37:38.917530 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:37:39 crc kubenswrapper[4814]: I0216 10:37:39.022506 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:37:39 crc kubenswrapper[4814]: E0216 10:37:39.022963 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:37:51 crc kubenswrapper[4814]: I0216 10:37:51.994610 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:37:51 crc kubenswrapper[4814]: E0216 10:37:51.995755 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:37:53 crc kubenswrapper[4814]: I0216 10:37:53.006640 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:37:53 crc kubenswrapper[4814]: E0216 10:37:53.007956 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:38:03 crc kubenswrapper[4814]: I0216 10:38:03.001124 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:38:03 crc kubenswrapper[4814]: E0216 10:38:03.002138 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:38:03 crc kubenswrapper[4814]: I0216 10:38:03.993999 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:38:03 crc kubenswrapper[4814]: E0216 10:38:03.994803 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:38:14 crc 
kubenswrapper[4814]: I0216 10:38:14.994263 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:38:14 crc kubenswrapper[4814]: I0216 10:38:14.995118 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:38:14 crc kubenswrapper[4814]: E0216 10:38:14.995320 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:38:14 crc kubenswrapper[4814]: E0216 10:38:14.995502 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:38:28 crc kubenswrapper[4814]: I0216 10:38:28.994363 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:38:28 crc kubenswrapper[4814]: I0216 10:38:28.995391 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:38:28 crc kubenswrapper[4814]: E0216 10:38:28.995624 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:38:28 crc kubenswrapper[4814]: E0216 10:38:28.995724 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:38:40 crc kubenswrapper[4814]: I0216 10:38:40.994342 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:38:40 crc kubenswrapper[4814]: E0216 10:38:40.995598 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:38:43 crc kubenswrapper[4814]: I0216 10:38:43.994404 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:38:43 crc kubenswrapper[4814]: E0216 10:38:43.995215 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:38:51 crc kubenswrapper[4814]: I0216 10:38:51.993899 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:38:51 crc 
kubenswrapper[4814]: E0216 10:38:51.995068 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:38:56 crc kubenswrapper[4814]: I0216 10:38:56.994598 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:38:56 crc kubenswrapper[4814]: E0216 10:38:56.995674 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:39:03 crc kubenswrapper[4814]: I0216 10:39:03.005995 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:39:03 crc kubenswrapper[4814]: E0216 10:39:03.007715 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:39:08 crc kubenswrapper[4814]: I0216 10:39:08.994420 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:39:08 crc kubenswrapper[4814]: E0216 10:39:08.995771 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:39:15 crc kubenswrapper[4814]: I0216 10:39:15.994258 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:39:15 crc kubenswrapper[4814]: E0216 10:39:15.995357 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:39:21 crc kubenswrapper[4814]: I0216 10:39:21.009242 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:39:21 crc kubenswrapper[4814]: E0216 10:39:21.011240 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:39:29 crc kubenswrapper[4814]: I0216 10:39:29.995600 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:39:29 crc kubenswrapper[4814]: E0216 10:39:29.997002 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:39:34 crc kubenswrapper[4814]: I0216 10:39:34.995046 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:39:34 crc kubenswrapper[4814]: E0216 10:39:34.997018 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:39:40 crc kubenswrapper[4814]: I0216 10:39:40.993771 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:39:40 crc kubenswrapper[4814]: E0216 10:39:40.994493 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:39:45 crc kubenswrapper[4814]: I0216 10:39:45.994628 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74" Feb 16 10:39:45 crc kubenswrapper[4814]: E0216 10:39:45.995776 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.822992 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"] Feb 16 10:39:48 crc kubenswrapper[4814]: E0216 10:39:48.865919 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="extract-utilities" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.865987 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="extract-utilities" Feb 16 10:39:48 crc kubenswrapper[4814]: E0216 10:39:48.866078 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="registry-server" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.866094 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="registry-server" Feb 16 10:39:48 crc kubenswrapper[4814]: E0216 10:39:48.866187 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="extract-content" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.866211 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="extract-content" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.870158 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc35456-00f7-41c0-80b1-d7613fb5c66b" containerName="registry-server" Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.901777 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"] Feb 16 10:39:48 crc kubenswrapper[4814]: I0216 10:39:48.901999 4814 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.090845 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.091421 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.091491 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p689z\" (UniqueName: \"kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.194517 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.194612 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.194639 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p689z\" (UniqueName: \"kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.195345 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.195532 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.233912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p689z\" (UniqueName: \"kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z\") pod \"certified-operators-zgfh2\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") " pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.235169 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:49 crc kubenswrapper[4814]: I0216 10:39:49.793169 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"]
Feb 16 10:39:50 crc kubenswrapper[4814]: I0216 10:39:50.490865 4814 generic.go:334] "Generic (PLEG): container finished" podID="348392cc-011d-4296-8e6e-9640479a42cb" containerID="e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155" exitCode=0
Feb 16 10:39:50 crc kubenswrapper[4814]: I0216 10:39:50.490916 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerDied","Data":"e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155"}
Feb 16 10:39:50 crc kubenswrapper[4814]: I0216 10:39:50.490946 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerStarted","Data":"6743c970667de69cbf5575a6af375ec1443cfa373b9c1403b610aa056d36147c"}
Feb 16 10:39:50 crc kubenswrapper[4814]: I0216 10:39:50.493765 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 10:39:52 crc kubenswrapper[4814]: I0216 10:39:52.513757 4814 generic.go:334] "Generic (PLEG): container finished" podID="348392cc-011d-4296-8e6e-9640479a42cb" containerID="2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541" exitCode=0
Feb 16 10:39:52 crc kubenswrapper[4814]: I0216 10:39:52.513846 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerDied","Data":"2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541"}
Feb 16 10:39:53 crc kubenswrapper[4814]: I0216 10:39:53.531525 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerStarted","Data":"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"}
Feb 16 10:39:53 crc kubenswrapper[4814]: I0216 10:39:53.568937 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zgfh2" podStartSLOduration=3.084498193 podStartE2EDuration="5.568915207s" podCreationTimestamp="2026-02-16 10:39:48 +0000 UTC" firstStartedPulling="2026-02-16 10:39:50.493558222 +0000 UTC m=+3248.186714402" lastFinishedPulling="2026-02-16 10:39:52.977975236 +0000 UTC m=+3250.671131416" observedRunningTime="2026-02-16 10:39:53.566276865 +0000 UTC m=+3251.259433055" watchObservedRunningTime="2026-02-16 10:39:53.568915207 +0000 UTC m=+3251.262071407"
Feb 16 10:39:55 crc kubenswrapper[4814]: I0216 10:39:55.993710 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"
Feb 16 10:39:55 crc kubenswrapper[4814]: E0216 10:39:55.994522 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.235785 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.236352 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.332037 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.666981 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.732163 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"]
Feb 16 10:39:59 crc kubenswrapper[4814]: I0216 10:39:59.994178 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"
Feb 16 10:39:59 crc kubenswrapper[4814]: E0216 10:39:59.994602 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:40:01 crc kubenswrapper[4814]: I0216 10:40:01.627264 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zgfh2" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="registry-server" containerID="cri-o://a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab" gracePeriod=2
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.120838 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.230368 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p689z\" (UniqueName: \"kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z\") pod \"348392cc-011d-4296-8e6e-9640479a42cb\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") "
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.230465 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities\") pod \"348392cc-011d-4296-8e6e-9640479a42cb\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") "
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.230496 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content\") pod \"348392cc-011d-4296-8e6e-9640479a42cb\" (UID: \"348392cc-011d-4296-8e6e-9640479a42cb\") "
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.231323 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities" (OuterVolumeSpecName: "utilities") pod "348392cc-011d-4296-8e6e-9640479a42cb" (UID: "348392cc-011d-4296-8e6e-9640479a42cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.238996 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z" (OuterVolumeSpecName: "kube-api-access-p689z") pod "348392cc-011d-4296-8e6e-9640479a42cb" (UID: "348392cc-011d-4296-8e6e-9640479a42cb"). InnerVolumeSpecName "kube-api-access-p689z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.294037 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "348392cc-011d-4296-8e6e-9640479a42cb" (UID: "348392cc-011d-4296-8e6e-9640479a42cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.332951 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p689z\" (UniqueName: \"kubernetes.io/projected/348392cc-011d-4296-8e6e-9640479a42cb-kube-api-access-p689z\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.333005 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.333021 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/348392cc-011d-4296-8e6e-9640479a42cb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.639780 4814 generic.go:334] "Generic (PLEG): container finished" podID="348392cc-011d-4296-8e6e-9640479a42cb" containerID="a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab" exitCode=0
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.639826 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerDied","Data":"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"}
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.639856 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zgfh2" event={"ID":"348392cc-011d-4296-8e6e-9640479a42cb","Type":"ContainerDied","Data":"6743c970667de69cbf5575a6af375ec1443cfa373b9c1403b610aa056d36147c"}
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.639865 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zgfh2"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.639874 4814 scope.go:117] "RemoveContainer" containerID="a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.671202 4814 scope.go:117] "RemoveContainer" containerID="2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.698835 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"]
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.698892 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zgfh2"]
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.760833 4814 scope.go:117] "RemoveContainer" containerID="e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.783726 4814 scope.go:117] "RemoveContainer" containerID="a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"
Feb 16 10:40:02 crc kubenswrapper[4814]: E0216 10:40:02.784537 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab\": container with ID starting with a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab not found: ID does not exist" containerID="a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.784626 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab"} err="failed to get container status \"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab\": rpc error: code = NotFound desc = could not find container \"a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab\": container with ID starting with a880676fd5c5050aaccf4a602884fda7a1649d989ebba69e523b925659a42bab not found: ID does not exist"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.784658 4814 scope.go:117] "RemoveContainer" containerID="2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541"
Feb 16 10:40:02 crc kubenswrapper[4814]: E0216 10:40:02.788883 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541\": container with ID starting with 2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541 not found: ID does not exist" containerID="2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.788915 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541"} err="failed to get container status \"2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541\": rpc error: code = NotFound desc = could not find container \"2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541\": container with ID starting with 2caae62b622cc72759d62b7fe109ec11ebe6f8218645b676dc330a169c202541 not found: ID does not exist"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.788937 4814 scope.go:117] "RemoveContainer" containerID="e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155"
Feb 16 10:40:02 crc kubenswrapper[4814]: E0216 10:40:02.789194 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155\": container with ID starting with e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155 not found: ID does not exist" containerID="e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155"
Feb 16 10:40:02 crc kubenswrapper[4814]: I0216 10:40:02.789221 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155"} err="failed to get container status \"e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155\": rpc error: code = NotFound desc = could not find container \"e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155\": container with ID starting with e94486afd052d8942900f4751d100919f64eb307884ddae6b41da4f52608d155 not found: ID does not exist"
Feb 16 10:40:03 crc kubenswrapper[4814]: I0216 10:40:03.016129 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348392cc-011d-4296-8e6e-9640479a42cb" path="/var/lib/kubelet/pods/348392cc-011d-4296-8e6e-9640479a42cb/volumes"
Feb 16 10:40:08 crc kubenswrapper[4814]: I0216 10:40:08.993818 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"
Feb 16 10:40:08 crc kubenswrapper[4814]: E0216 10:40:08.994904 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:40:13 crc kubenswrapper[4814]: I0216 10:40:13.994958 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"
Feb 16 10:40:13 crc kubenswrapper[4814]: E0216 10:40:13.996741 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.058621 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"]
Feb 16 10:40:19 crc kubenswrapper[4814]: E0216 10:40:19.060257 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="extract-utilities"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.060282 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="extract-utilities"
Feb 16 10:40:19 crc kubenswrapper[4814]: E0216 10:40:19.060379 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="registry-server"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.060392 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="registry-server"
Feb 16 10:40:19 crc kubenswrapper[4814]: E0216 10:40:19.060419 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="extract-content"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.060433 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="extract-content"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.060847 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="348392cc-011d-4296-8e6e-9640479a42cb" containerName="registry-server"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.064616 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.068817 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"]
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.198509 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5bs\" (UniqueName: \"kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.198758 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.198983 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.301292 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.301524 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c5bs\" (UniqueName: \"kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.301648 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.302460 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.303083 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.328509 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c5bs\" (UniqueName: \"kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs\") pod \"redhat-operators-76z5t\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") " pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.399875 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:19 crc kubenswrapper[4814]: I0216 10:40:19.937591 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"]
Feb 16 10:40:20 crc kubenswrapper[4814]: I0216 10:40:20.859071 4814 generic.go:334] "Generic (PLEG): container finished" podID="4586c020-a841-4f7c-83d7-f4d06254119c" containerID="4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16" exitCode=0
Feb 16 10:40:20 crc kubenswrapper[4814]: I0216 10:40:20.859201 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerDied","Data":"4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16"}
Feb 16 10:40:20 crc kubenswrapper[4814]: I0216 10:40:20.860703 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerStarted","Data":"616aa7c36c687703a8f583360044d74477633737194ec1095ac922596f647e11"}
Feb 16 10:40:21 crc kubenswrapper[4814]: I0216 10:40:21.872647 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerStarted","Data":"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580"}
Feb 16 10:40:22 crc kubenswrapper[4814]: I0216 10:40:22.888852 4814 generic.go:334] "Generic (PLEG): container finished" podID="4586c020-a841-4f7c-83d7-f4d06254119c" containerID="b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580" exitCode=0
Feb 16 10:40:22 crc kubenswrapper[4814]: I0216 10:40:22.888973 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerDied","Data":"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580"}
Feb 16 10:40:23 crc kubenswrapper[4814]: I0216 10:40:23.905430 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerStarted","Data":"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9"}
Feb 16 10:40:23 crc kubenswrapper[4814]: I0216 10:40:23.951251 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-76z5t" podStartSLOduration=2.517848944 podStartE2EDuration="4.951226394s" podCreationTimestamp="2026-02-16 10:40:19 +0000 UTC" firstStartedPulling="2026-02-16 10:40:20.862726393 +0000 UTC m=+3278.555882593" lastFinishedPulling="2026-02-16 10:40:23.296103863 +0000 UTC m=+3280.989260043" observedRunningTime="2026-02-16 10:40:23.937390018 +0000 UTC m=+3281.630546218" watchObservedRunningTime="2026-02-16 10:40:23.951226394 +0000 UTC m=+3281.644382584"
Feb 16 10:40:23 crc kubenswrapper[4814]: I0216 10:40:23.994063 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"
Feb 16 10:40:23 crc kubenswrapper[4814]: E0216 10:40:23.994390 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:40:27 crc kubenswrapper[4814]: I0216 10:40:27.994287 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"
Feb 16 10:40:28 crc kubenswrapper[4814]: I0216 10:40:28.963127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb"}
Feb 16 10:40:29 crc kubenswrapper[4814]: I0216 10:40:29.401149 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:29 crc kubenswrapper[4814]: I0216 10:40:29.401254 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:30 crc kubenswrapper[4814]: I0216 10:40:30.485767 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-76z5t" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="registry-server" probeResult="failure" output=<
Feb 16 10:40:30 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s
Feb 16 10:40:30 crc kubenswrapper[4814]: >
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.010857 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" exitCode=0
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.010915 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb"}
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.011441 4814 scope.go:117] "RemoveContainer" containerID="4642ea08c3b229102961752d2137b702dfa5b3f391929847daae4d12d7c9ae74"
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.012022 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb"
Feb 16 10:40:32 crc kubenswrapper[4814]: E0216 10:40:32.012457 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.677389 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:40:32 crc kubenswrapper[4814]: I0216 10:40:32.677494 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:40:33 crc kubenswrapper[4814]: I0216 10:40:33.040041 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb"
Feb 16 10:40:33 crc kubenswrapper[4814]: E0216 10:40:33.042290 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:40:33 crc kubenswrapper[4814]: I0216 10:40:33.677021 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:40:34 crc kubenswrapper[4814]: I0216 10:40:34.053119 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb"
Feb 16 10:40:34 crc kubenswrapper[4814]: E0216 10:40:34.053759 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:40:35 crc kubenswrapper[4814]: I0216 10:40:35.994190 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7"
Feb 16 10:40:35 crc kubenswrapper[4814]: E0216 10:40:35.994900 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:40:39 crc kubenswrapper[4814]: I0216 10:40:39.457709 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:39 crc kubenswrapper[4814]: I0216 10:40:39.526191 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:39 crc kubenswrapper[4814]: I0216 10:40:39.721774 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"]
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.144054 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-76z5t" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="registry-server" containerID="cri-o://7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9" gracePeriod=2
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.706414 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-76z5t"
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.812893 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content\") pod \"4586c020-a841-4f7c-83d7-f4d06254119c\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") "
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.813058 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities\") pod \"4586c020-a841-4f7c-83d7-f4d06254119c\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") "
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.813352 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c5bs\" (UniqueName: \"kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs\") pod \"4586c020-a841-4f7c-83d7-f4d06254119c\" (UID: \"4586c020-a841-4f7c-83d7-f4d06254119c\") "
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.814286 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities" (OuterVolumeSpecName: "utilities") pod "4586c020-a841-4f7c-83d7-f4d06254119c" (UID: "4586c020-a841-4f7c-83d7-f4d06254119c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.821864 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs" (OuterVolumeSpecName: "kube-api-access-6c5bs") pod "4586c020-a841-4f7c-83d7-f4d06254119c" (UID: "4586c020-a841-4f7c-83d7-f4d06254119c"). InnerVolumeSpecName "kube-api-access-6c5bs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.916857 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c5bs\" (UniqueName: \"kubernetes.io/projected/4586c020-a841-4f7c-83d7-f4d06254119c-kube-api-access-6c5bs\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.916892 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:41 crc kubenswrapper[4814]: I0216 10:40:41.987246 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4586c020-a841-4f7c-83d7-f4d06254119c" (UID: "4586c020-a841-4f7c-83d7-f4d06254119c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.019609 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4586c020-a841-4f7c-83d7-f4d06254119c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.166723 4814 generic.go:334] "Generic (PLEG): container finished" podID="4586c020-a841-4f7c-83d7-f4d06254119c" containerID="7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9" exitCode=0
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.166871 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerDied","Data":"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9"}
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.166984 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-76z5t" event={"ID":"4586c020-a841-4f7c-83d7-f4d06254119c","Type":"ContainerDied","Data":"616aa7c36c687703a8f583360044d74477633737194ec1095ac922596f647e11"}
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.167015 4814 scope.go:117] "RemoveContainer" containerID="7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9"
Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.167043 4814 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-76z5t" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.196188 4814 scope.go:117] "RemoveContainer" containerID="b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.224267 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"] Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.235797 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-76z5t"] Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.258600 4814 scope.go:117] "RemoveContainer" containerID="4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.294289 4814 scope.go:117] "RemoveContainer" containerID="7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9" Feb 16 10:40:42 crc kubenswrapper[4814]: E0216 10:40:42.294951 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9\": container with ID starting with 7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9 not found: ID does not exist" containerID="7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.294994 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9"} err="failed to get container status \"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9\": rpc error: code = NotFound desc = could not find container \"7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9\": container with ID starting with 7ef06509d07504e0d63d69b097b0bce8d1be767f394cd1523f59cbac61c68ed9 not found: ID does 
not exist" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.295027 4814 scope.go:117] "RemoveContainer" containerID="b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580" Feb 16 10:40:42 crc kubenswrapper[4814]: E0216 10:40:42.297237 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580\": container with ID starting with b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580 not found: ID does not exist" containerID="b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.297288 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580"} err="failed to get container status \"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580\": rpc error: code = NotFound desc = could not find container \"b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580\": container with ID starting with b4b0610be82ace94a7a9dd5ea68fa8a625f06466d4ccc9ced665d0dfe31a5580 not found: ID does not exist" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.297321 4814 scope.go:117] "RemoveContainer" containerID="4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16" Feb 16 10:40:42 crc kubenswrapper[4814]: E0216 10:40:42.297936 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16\": container with ID starting with 4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16 not found: ID does not exist" containerID="4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16" Feb 16 10:40:42 crc kubenswrapper[4814]: I0216 10:40:42.298017 4814 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16"} err="failed to get container status \"4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16\": rpc error: code = NotFound desc = could not find container \"4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16\": container with ID starting with 4ed3f3e8ada4072febd28277240da8a23ce298a2dd8f9dab1a7c36058f34cf16 not found: ID does not exist" Feb 16 10:40:43 crc kubenswrapper[4814]: I0216 10:40:43.015796 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" path="/var/lib/kubelet/pods/4586c020-a841-4f7c-83d7-f4d06254119c/volumes" Feb 16 10:40:47 crc kubenswrapper[4814]: I0216 10:40:47.994388 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:40:47 crc kubenswrapper[4814]: E0216 10:40:47.996426 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:40:48 crc kubenswrapper[4814]: I0216 10:40:48.993467 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:40:48 crc kubenswrapper[4814]: E0216 10:40:48.994088 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:40:52 crc kubenswrapper[4814]: I0216 10:40:52.808023 4814 scope.go:117] "RemoveContainer" containerID="8d2bc9d0bfe647632e8661d36a543c2f3db6c0a57ac4df93bd209552e43e477f" Feb 16 10:40:52 crc kubenswrapper[4814]: I0216 10:40:52.846175 4814 scope.go:117] "RemoveContainer" containerID="804b99a06e501d705d498906cd83c62110459231da1790a8f6a5f1231de56080" Feb 16 10:40:52 crc kubenswrapper[4814]: I0216 10:40:52.875331 4814 scope.go:117] "RemoveContainer" containerID="f94b1e1f045d7f15ba7bc31597a4c26e36da8457c5ae1dc8e12e1caca508b25c" Feb 16 10:41:01 crc kubenswrapper[4814]: I0216 10:41:01.994187 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:41:01 crc kubenswrapper[4814]: E0216 10:41:01.995420 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:41:01 crc kubenswrapper[4814]: I0216 10:41:01.995658 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:41:01 crc kubenswrapper[4814]: E0216 10:41:01.996064 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:41:14 crc kubenswrapper[4814]: I0216 10:41:14.994487 4814 scope.go:117] "RemoveContainer" 
containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:41:14 crc kubenswrapper[4814]: E0216 10:41:14.995857 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:41:16 crc kubenswrapper[4814]: I0216 10:41:16.998369 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:41:16 crc kubenswrapper[4814]: E0216 10:41:16.998764 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:41:28 crc kubenswrapper[4814]: I0216 10:41:28.994355 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:41:28 crc kubenswrapper[4814]: E0216 10:41:28.996106 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:41:29 crc kubenswrapper[4814]: I0216 10:41:29.994368 4814 scope.go:117] "RemoveContainer" 
containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:41:29 crc kubenswrapper[4814]: E0216 10:41:29.995165 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:41:40 crc kubenswrapper[4814]: I0216 10:41:40.994623 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:41:40 crc kubenswrapper[4814]: E0216 10:41:40.996615 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:41:44 crc kubenswrapper[4814]: I0216 10:41:44.995718 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:41:44 crc kubenswrapper[4814]: E0216 10:41:44.996785 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:41:53 crc kubenswrapper[4814]: I0216 10:41:53.993817 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:41:53 crc kubenswrapper[4814]: E0216 
10:41:53.994662 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:41:58 crc kubenswrapper[4814]: I0216 10:41:58.994231 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:41:58 crc kubenswrapper[4814]: E0216 10:41:58.995607 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:42:04 crc kubenswrapper[4814]: I0216 10:42:04.993878 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:42:04 crc kubenswrapper[4814]: E0216 10:42:04.995283 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:42:09 crc kubenswrapper[4814]: I0216 10:42:09.993902 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:42:09 crc kubenswrapper[4814]: E0216 10:42:09.994980 4814 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:42:16 crc kubenswrapper[4814]: I0216 10:42:16.994409 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:42:16 crc kubenswrapper[4814]: E0216 10:42:16.995715 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:42:24 crc kubenswrapper[4814]: I0216 10:42:24.995145 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:42:24 crc kubenswrapper[4814]: E0216 10:42:24.996646 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:42:27 crc kubenswrapper[4814]: I0216 10:42:27.995164 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:42:27 crc kubenswrapper[4814]: E0216 10:42:27.996409 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:42:36 crc kubenswrapper[4814]: I0216 10:42:36.994274 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:42:36 crc kubenswrapper[4814]: E0216 10:42:36.995354 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:42:38 crc kubenswrapper[4814]: I0216 10:42:38.994108 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:42:39 crc kubenswrapper[4814]: I0216 10:42:39.760409 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c"} Feb 16 10:42:50 crc kubenswrapper[4814]: I0216 10:42:50.993411 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:42:50 crc kubenswrapper[4814]: E0216 10:42:50.994510 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:43:03 crc 
kubenswrapper[4814]: I0216 10:43:03.001068 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:43:03 crc kubenswrapper[4814]: E0216 10:43:03.002215 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:43:13 crc kubenswrapper[4814]: I0216 10:43:13.994632 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:43:13 crc kubenswrapper[4814]: E0216 10:43:13.995972 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:43:28 crc kubenswrapper[4814]: I0216 10:43:28.994642 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:43:28 crc kubenswrapper[4814]: E0216 10:43:28.995757 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:43:41 crc kubenswrapper[4814]: I0216 10:43:41.994320 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:43:41 crc kubenswrapper[4814]: E0216 
10:43:41.995629 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:43:53 crc kubenswrapper[4814]: I0216 10:43:53.994280 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:43:53 crc kubenswrapper[4814]: E0216 10:43:53.995467 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:44:07 crc kubenswrapper[4814]: I0216 10:44:07.994226 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:44:07 crc kubenswrapper[4814]: E0216 10:44:07.995228 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:44:21 crc kubenswrapper[4814]: I0216 10:44:21.993469 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:44:21 crc kubenswrapper[4814]: E0216 10:44:21.994951 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:44:33 crc kubenswrapper[4814]: I0216 10:44:33.995071 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:44:33 crc kubenswrapper[4814]: E0216 10:44:33.997140 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.563525 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:40 crc kubenswrapper[4814]: E0216 10:44:40.566786 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="registry-server" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.566820 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="registry-server" Feb 16 10:44:40 crc kubenswrapper[4814]: E0216 10:44:40.566851 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="extract-utilities" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.566861 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="extract-utilities" Feb 16 10:44:40 crc kubenswrapper[4814]: E0216 10:44:40.566882 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="extract-content" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 
10:44:40.566891 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="extract-content" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.567198 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4586c020-a841-4f7c-83d7-f4d06254119c" containerName="registry-server" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.571301 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.587757 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.621163 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.621273 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.621335 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6qq\" (UniqueName: \"kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: 
I0216 10:44:40.723683 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6qq\" (UniqueName: \"kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.723908 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.723978 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.724816 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.724925 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.758290 4814 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6qq\" (UniqueName: \"kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq\") pod \"redhat-marketplace-pnm4b\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:40 crc kubenswrapper[4814]: I0216 10:44:40.901436 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:41 crc kubenswrapper[4814]: I0216 10:44:41.498995 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:42 crc kubenswrapper[4814]: I0216 10:44:42.257165 4814 generic.go:334] "Generic (PLEG): container finished" podID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerID="72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa" exitCode=0 Feb 16 10:44:42 crc kubenswrapper[4814]: I0216 10:44:42.257311 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerDied","Data":"72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa"} Feb 16 10:44:42 crc kubenswrapper[4814]: I0216 10:44:42.257681 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerStarted","Data":"7ff59995ac05fdfde3919e06410f25480f93e8308c63dc7fcf9f1b0544930c86"} Feb 16 10:44:43 crc kubenswrapper[4814]: I0216 10:44:43.274997 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerStarted","Data":"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329"} Feb 16 10:44:44 crc kubenswrapper[4814]: I0216 10:44:44.288662 4814 
generic.go:334] "Generic (PLEG): container finished" podID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerID="d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329" exitCode=0 Feb 16 10:44:44 crc kubenswrapper[4814]: I0216 10:44:44.288753 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerDied","Data":"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329"} Feb 16 10:44:45 crc kubenswrapper[4814]: I0216 10:44:45.311442 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerStarted","Data":"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e"} Feb 16 10:44:45 crc kubenswrapper[4814]: I0216 10:44:45.336457 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pnm4b" podStartSLOduration=2.929705556 podStartE2EDuration="5.336432032s" podCreationTimestamp="2026-02-16 10:44:40 +0000 UTC" firstStartedPulling="2026-02-16 10:44:42.260395207 +0000 UTC m=+3539.953551417" lastFinishedPulling="2026-02-16 10:44:44.667121673 +0000 UTC m=+3542.360277893" observedRunningTime="2026-02-16 10:44:45.33303716 +0000 UTC m=+3543.026193370" watchObservedRunningTime="2026-02-16 10:44:45.336432032 +0000 UTC m=+3543.029588242" Feb 16 10:44:48 crc kubenswrapper[4814]: I0216 10:44:48.994380 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:44:48 crc kubenswrapper[4814]: E0216 10:44:48.995773 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" 
pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:44:50 crc kubenswrapper[4814]: I0216 10:44:50.901888 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:50 crc kubenswrapper[4814]: I0216 10:44:50.902357 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:51 crc kubenswrapper[4814]: I0216 10:44:51.013772 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:51 crc kubenswrapper[4814]: I0216 10:44:51.489677 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:51 crc kubenswrapper[4814]: I0216 10:44:51.600772 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:53 crc kubenswrapper[4814]: I0216 10:44:53.427150 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pnm4b" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="registry-server" containerID="cri-o://04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e" gracePeriod=2 Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.011137 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.188978 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content\") pod \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.189132 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k6qq\" (UniqueName: \"kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq\") pod \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.189156 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities\") pod \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\" (UID: \"6e667138-62b1-42f2-8a4e-a4428b7fe3c9\") " Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.191247 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities" (OuterVolumeSpecName: "utilities") pod "6e667138-62b1-42f2-8a4e-a4428b7fe3c9" (UID: "6e667138-62b1-42f2-8a4e-a4428b7fe3c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.199914 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq" (OuterVolumeSpecName: "kube-api-access-9k6qq") pod "6e667138-62b1-42f2-8a4e-a4428b7fe3c9" (UID: "6e667138-62b1-42f2-8a4e-a4428b7fe3c9"). InnerVolumeSpecName "kube-api-access-9k6qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.217620 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e667138-62b1-42f2-8a4e-a4428b7fe3c9" (UID: "6e667138-62b1-42f2-8a4e-a4428b7fe3c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.293052 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.293127 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k6qq\" (UniqueName: \"kubernetes.io/projected/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-kube-api-access-9k6qq\") on node \"crc\" DevicePath \"\"" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.293159 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e667138-62b1-42f2-8a4e-a4428b7fe3c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.439789 4814 generic.go:334] "Generic (PLEG): container finished" podID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerID="04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e" exitCode=0 Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.439856 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerDied","Data":"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e"} Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.439865 4814 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pnm4b" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.439907 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pnm4b" event={"ID":"6e667138-62b1-42f2-8a4e-a4428b7fe3c9","Type":"ContainerDied","Data":"7ff59995ac05fdfde3919e06410f25480f93e8308c63dc7fcf9f1b0544930c86"} Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.439939 4814 scope.go:117] "RemoveContainer" containerID="04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.492630 4814 scope.go:117] "RemoveContainer" containerID="d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.492671 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.506568 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pnm4b"] Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.515237 4814 scope.go:117] "RemoveContainer" containerID="72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.617829 4814 scope.go:117] "RemoveContainer" containerID="04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e" Feb 16 10:44:54 crc kubenswrapper[4814]: E0216 10:44:54.618598 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e\": container with ID starting with 04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e not found: ID does not exist" containerID="04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.618662 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e"} err="failed to get container status \"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e\": rpc error: code = NotFound desc = could not find container \"04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e\": container with ID starting with 04e858a662d4824cc142b360e4d9634172b3483f1028b5b878b789430c71dd9e not found: ID does not exist" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.618702 4814 scope.go:117] "RemoveContainer" containerID="d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329" Feb 16 10:44:54 crc kubenswrapper[4814]: E0216 10:44:54.619114 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329\": container with ID starting with d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329 not found: ID does not exist" containerID="d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.619186 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329"} err="failed to get container status \"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329\": rpc error: code = NotFound desc = could not find container \"d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329\": container with ID starting with d338c30fc224a90958e9ee92d24b114d2e910adc121e6e926bd729c0ea327329 not found: ID does not exist" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.619241 4814 scope.go:117] "RemoveContainer" containerID="72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa" Feb 16 10:44:54 crc kubenswrapper[4814]: E0216 
10:44:54.619658 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa\": container with ID starting with 72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa not found: ID does not exist" containerID="72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa" Feb 16 10:44:54 crc kubenswrapper[4814]: I0216 10:44:54.619712 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa"} err="failed to get container status \"72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa\": rpc error: code = NotFound desc = could not find container \"72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa\": container with ID starting with 72fc8c9e148b0bc10ee87303104d92fec4f4e9acc5eafe62cc04c1e537a0c4aa not found: ID does not exist" Feb 16 10:44:55 crc kubenswrapper[4814]: I0216 10:44:55.025455 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" path="/var/lib/kubelet/pods/6e667138-62b1-42f2-8a4e-a4428b7fe3c9/volumes" Feb 16 10:44:59 crc kubenswrapper[4814]: I0216 10:44:59.994206 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:44:59 crc kubenswrapper[4814]: E0216 10:44:59.995350 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.147358 4814 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9"] Feb 16 10:45:00 crc kubenswrapper[4814]: E0216 10:45:00.147802 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="extract-content" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.148170 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="extract-content" Feb 16 10:45:00 crc kubenswrapper[4814]: E0216 10:45:00.148215 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="extract-utilities" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.148222 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="extract-utilities" Feb 16 10:45:00 crc kubenswrapper[4814]: E0216 10:45:00.148234 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="registry-server" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.148240 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="registry-server" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.148426 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e667138-62b1-42f2-8a4e-a4428b7fe3c9" containerName="registry-server" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.149125 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.152009 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.153386 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.162456 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9"] Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.235592 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.235705 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.235788 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfz24\" (UniqueName: \"kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.338917 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfz24\" (UniqueName: \"kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.339084 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.339145 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.340496 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.359595 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.370613 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfz24\" (UniqueName: \"kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24\") pod \"collect-profiles-29520645-q5st9\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:00 crc kubenswrapper[4814]: I0216 10:45:00.478635 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:01 crc kubenswrapper[4814]: I0216 10:45:01.026846 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9"] Feb 16 10:45:01 crc kubenswrapper[4814]: I0216 10:45:01.515247 4814 generic.go:334] "Generic (PLEG): container finished" podID="52d7eefc-253c-405b-ae86-1a166ccd04f1" containerID="d8c3b878e4bab840edf4e64b77d4c585481186b729c890f5576c29c623c47249" exitCode=0 Feb 16 10:45:01 crc kubenswrapper[4814]: I0216 10:45:01.515359 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" event={"ID":"52d7eefc-253c-405b-ae86-1a166ccd04f1","Type":"ContainerDied","Data":"d8c3b878e4bab840edf4e64b77d4c585481186b729c890f5576c29c623c47249"} Feb 16 10:45:01 crc kubenswrapper[4814]: I0216 10:45:01.515745 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" 
event={"ID":"52d7eefc-253c-405b-ae86-1a166ccd04f1","Type":"ContainerStarted","Data":"f2e025bbd8edd56ea22e3bad0928eb0c83e0683ce85ca8f2e6496dd03b1df707"} Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.919335 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.940935 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfz24\" (UniqueName: \"kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24\") pod \"52d7eefc-253c-405b-ae86-1a166ccd04f1\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.941342 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume\") pod \"52d7eefc-253c-405b-ae86-1a166ccd04f1\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.941455 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume\") pod \"52d7eefc-253c-405b-ae86-1a166ccd04f1\" (UID: \"52d7eefc-253c-405b-ae86-1a166ccd04f1\") " Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.942635 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "52d7eefc-253c-405b-ae86-1a166ccd04f1" (UID: "52d7eefc-253c-405b-ae86-1a166ccd04f1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.950847 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52d7eefc-253c-405b-ae86-1a166ccd04f1" (UID: "52d7eefc-253c-405b-ae86-1a166ccd04f1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 10:45:02 crc kubenswrapper[4814]: I0216 10:45:02.953000 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24" (OuterVolumeSpecName: "kube-api-access-nfz24") pod "52d7eefc-253c-405b-ae86-1a166ccd04f1" (UID: "52d7eefc-253c-405b-ae86-1a166ccd04f1"). InnerVolumeSpecName "kube-api-access-nfz24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.044944 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d7eefc-253c-405b-ae86-1a166ccd04f1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.044980 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52d7eefc-253c-405b-ae86-1a166ccd04f1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.044990 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfz24\" (UniqueName: \"kubernetes.io/projected/52d7eefc-253c-405b-ae86-1a166ccd04f1-kube-api-access-nfz24\") on node \"crc\" DevicePath \"\"" Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.541738 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" 
event={"ID":"52d7eefc-253c-405b-ae86-1a166ccd04f1","Type":"ContainerDied","Data":"f2e025bbd8edd56ea22e3bad0928eb0c83e0683ce85ca8f2e6496dd03b1df707"} Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.541948 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2e025bbd8edd56ea22e3bad0928eb0c83e0683ce85ca8f2e6496dd03b1df707" Feb 16 10:45:03 crc kubenswrapper[4814]: I0216 10:45:03.541859 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520645-q5st9" Feb 16 10:45:04 crc kubenswrapper[4814]: I0216 10:45:04.032593 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx"] Feb 16 10:45:04 crc kubenswrapper[4814]: I0216 10:45:04.040751 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520600-xhvwx"] Feb 16 10:45:05 crc kubenswrapper[4814]: I0216 10:45:05.036704 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ae10c9-d249-4455-a8ba-1ceef545a1b9" path="/var/lib/kubelet/pods/f6ae10c9-d249-4455-a8ba-1ceef545a1b9/volumes" Feb 16 10:45:07 crc kubenswrapper[4814]: I0216 10:45:07.960079 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:45:07 crc kubenswrapper[4814]: I0216 10:45:07.960587 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:45:13 crc 
kubenswrapper[4814]: I0216 10:45:13.015100 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:45:13 crc kubenswrapper[4814]: E0216 10:45:13.016476 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:23 crc kubenswrapper[4814]: I0216 10:45:23.994367 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:45:23 crc kubenswrapper[4814]: E0216 10:45:23.995946 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:37 crc kubenswrapper[4814]: I0216 10:45:37.959978 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:45:37 crc kubenswrapper[4814]: I0216 10:45:37.960895 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:45:38 crc kubenswrapper[4814]: I0216 10:45:38.998900 4814 
scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:45:39 crc kubenswrapper[4814]: I0216 10:45:39.988150 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"} Feb 16 10:45:42 crc kubenswrapper[4814]: I0216 10:45:42.677816 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:45:43 crc kubenswrapper[4814]: I0216 10:45:43.047606 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" exitCode=0 Feb 16 10:45:43 crc kubenswrapper[4814]: I0216 10:45:43.048033 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"} Feb 16 10:45:43 crc kubenswrapper[4814]: I0216 10:45:43.048086 4814 scope.go:117] "RemoveContainer" containerID="8e063b2f5eec8a9368883f2996b24f38351f451dfb009de8776f60ecc29b2afb" Feb 16 10:45:43 crc kubenswrapper[4814]: I0216 10:45:43.049313 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:45:43 crc kubenswrapper[4814]: E0216 10:45:43.049928 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:45 crc kubenswrapper[4814]: I0216 10:45:45.677086 4814 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:45:45 crc kubenswrapper[4814]: I0216 10:45:45.678656 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:45:45 crc kubenswrapper[4814]: E0216 10:45:45.679001 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:47 crc kubenswrapper[4814]: I0216 10:45:47.677054 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:45:47 crc kubenswrapper[4814]: I0216 10:45:47.679043 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:45:47 crc kubenswrapper[4814]: E0216 10:45:47.679430 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:45:53 crc kubenswrapper[4814]: I0216 10:45:53.084733 4814 scope.go:117] "RemoveContainer" containerID="543910e54dd85643b4dbd4de839b4134cd70692fd0accd647f989f7d744b024f" Feb 16 10:46:03 crc kubenswrapper[4814]: I0216 10:46:03.001883 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:46:03 crc kubenswrapper[4814]: E0216 10:46:03.003093 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:46:07 crc kubenswrapper[4814]: I0216 10:46:07.984467 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:46:07 crc kubenswrapper[4814]: I0216 10:46:07.988128 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:46:07 crc kubenswrapper[4814]: I0216 10:46:07.988475 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:46:07 crc kubenswrapper[4814]: I0216 10:46:07.989755 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:46:07 crc kubenswrapper[4814]: I0216 10:46:07.989915 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" 
containerID="cri-o://78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c" gracePeriod=600 Feb 16 10:46:08 crc kubenswrapper[4814]: I0216 10:46:08.365351 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c" exitCode=0 Feb 16 10:46:08 crc kubenswrapper[4814]: I0216 10:46:08.365451 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c"} Feb 16 10:46:08 crc kubenswrapper[4814]: I0216 10:46:08.365924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"} Feb 16 10:46:08 crc kubenswrapper[4814]: I0216 10:46:08.365963 4814 scope.go:117] "RemoveContainer" containerID="d31fba8d307995b186cffbe5d0df5f4837b49e4af31d9638e6ec86febdf67fd7" Feb 16 10:46:15 crc kubenswrapper[4814]: I0216 10:46:15.994247 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:46:15 crc kubenswrapper[4814]: E0216 10:46:15.995506 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:46:27 crc kubenswrapper[4814]: I0216 10:46:27.993242 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 
10:46:27 crc kubenswrapper[4814]: E0216 10:46:27.994483 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:46:39 crc kubenswrapper[4814]: I0216 10:46:39.994068 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:46:39 crc kubenswrapper[4814]: E0216 10:46:39.995308 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:46:50 crc kubenswrapper[4814]: I0216 10:46:50.994206 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:46:50 crc kubenswrapper[4814]: E0216 10:46:50.995765 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:47:01 crc kubenswrapper[4814]: I0216 10:47:01.993379 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:47:01 crc kubenswrapper[4814]: E0216 10:47:01.994511 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:47:14 crc kubenswrapper[4814]: I0216 10:47:14.995657 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:47:14 crc kubenswrapper[4814]: E0216 10:47:14.997312 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.610144 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:24 crc kubenswrapper[4814]: E0216 10:47:24.611428 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d7eefc-253c-405b-ae86-1a166ccd04f1" containerName="collect-profiles" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.611443 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d7eefc-253c-405b-ae86-1a166ccd04f1" containerName="collect-profiles" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.611685 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d7eefc-253c-405b-ae86-1a166ccd04f1" containerName="collect-profiles" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.613135 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.632009 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.712712 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29rl\" (UniqueName: \"kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.712820 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.712849 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.814958 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29rl\" (UniqueName: \"kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.815088 4814 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.815120 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.815975 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.816026 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.836667 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f29rl\" (UniqueName: \"kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl\") pod \"community-operators-qpqzq\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:24 crc kubenswrapper[4814]: I0216 10:47:24.937187 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:25 crc kubenswrapper[4814]: I0216 10:47:25.571006 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:25 crc kubenswrapper[4814]: I0216 10:47:25.994170 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:47:25 crc kubenswrapper[4814]: E0216 10:47:25.994913 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:47:26 crc kubenswrapper[4814]: I0216 10:47:26.228919 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerID="652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e" exitCode=0 Feb 16 10:47:26 crc kubenswrapper[4814]: I0216 10:47:26.229014 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerDied","Data":"652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e"} Feb 16 10:47:26 crc kubenswrapper[4814]: I0216 10:47:26.229098 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerStarted","Data":"e2adb571d32659c7a956f0cfd32901ffac2b3ddd4793cac03556f4b27cd156c0"} Feb 16 10:47:26 crc kubenswrapper[4814]: I0216 10:47:26.231134 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:47:28 crc kubenswrapper[4814]: I0216 10:47:28.249809 4814 generic.go:334] 
"Generic (PLEG): container finished" podID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerID="55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8" exitCode=0 Feb 16 10:47:28 crc kubenswrapper[4814]: I0216 10:47:28.249907 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerDied","Data":"55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8"} Feb 16 10:47:29 crc kubenswrapper[4814]: I0216 10:47:29.281103 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerStarted","Data":"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691"} Feb 16 10:47:29 crc kubenswrapper[4814]: I0216 10:47:29.312277 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qpqzq" podStartSLOduration=2.805545053 podStartE2EDuration="5.312257444s" podCreationTimestamp="2026-02-16 10:47:24 +0000 UTC" firstStartedPulling="2026-02-16 10:47:26.230841654 +0000 UTC m=+3703.923997834" lastFinishedPulling="2026-02-16 10:47:28.737554045 +0000 UTC m=+3706.430710225" observedRunningTime="2026-02-16 10:47:29.302815598 +0000 UTC m=+3706.995971808" watchObservedRunningTime="2026-02-16 10:47:29.312257444 +0000 UTC m=+3707.005413624" Feb 16 10:47:34 crc kubenswrapper[4814]: I0216 10:47:34.938139 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:34 crc kubenswrapper[4814]: I0216 10:47:34.939143 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:35 crc kubenswrapper[4814]: I0216 10:47:35.027201 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:35 crc kubenswrapper[4814]: I0216 10:47:35.405904 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:35 crc kubenswrapper[4814]: I0216 10:47:35.461986 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:37 crc kubenswrapper[4814]: I0216 10:47:37.365644 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qpqzq" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="registry-server" containerID="cri-o://1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691" gracePeriod=2 Feb 16 10:47:37 crc kubenswrapper[4814]: I0216 10:47:37.948003 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:37 crc kubenswrapper[4814]: I0216 10:47:37.994028 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:47:37 crc kubenswrapper[4814]: E0216 10:47:37.994961 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.142986 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities\") pod \"2ab401a5-78d1-40ec-a684-c45be1175bf4\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.143066 4814 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content\") pod \"2ab401a5-78d1-40ec-a684-c45be1175bf4\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.143188 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f29rl\" (UniqueName: \"kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl\") pod \"2ab401a5-78d1-40ec-a684-c45be1175bf4\" (UID: \"2ab401a5-78d1-40ec-a684-c45be1175bf4\") " Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.145094 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities" (OuterVolumeSpecName: "utilities") pod "2ab401a5-78d1-40ec-a684-c45be1175bf4" (UID: "2ab401a5-78d1-40ec-a684-c45be1175bf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.162477 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl" (OuterVolumeSpecName: "kube-api-access-f29rl") pod "2ab401a5-78d1-40ec-a684-c45be1175bf4" (UID: "2ab401a5-78d1-40ec-a684-c45be1175bf4"). InnerVolumeSpecName "kube-api-access-f29rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.216789 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ab401a5-78d1-40ec-a684-c45be1175bf4" (UID: "2ab401a5-78d1-40ec-a684-c45be1175bf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.245177 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.245250 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ab401a5-78d1-40ec-a684-c45be1175bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.245264 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f29rl\" (UniqueName: \"kubernetes.io/projected/2ab401a5-78d1-40ec-a684-c45be1175bf4-kube-api-access-f29rl\") on node \"crc\" DevicePath \"\"" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.378213 4814 generic.go:334] "Generic (PLEG): container finished" podID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerID="1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691" exitCode=0 Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.378267 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerDied","Data":"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691"} Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.378304 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qpqzq" event={"ID":"2ab401a5-78d1-40ec-a684-c45be1175bf4","Type":"ContainerDied","Data":"e2adb571d32659c7a956f0cfd32901ffac2b3ddd4793cac03556f4b27cd156c0"} Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.378327 4814 scope.go:117] "RemoveContainer" containerID="1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 
10:47:38.378338 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qpqzq" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.412646 4814 scope.go:117] "RemoveContainer" containerID="55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.436809 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.443997 4814 scope.go:117] "RemoveContainer" containerID="652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.452592 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qpqzq"] Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.494286 4814 scope.go:117] "RemoveContainer" containerID="1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691" Feb 16 10:47:38 crc kubenswrapper[4814]: E0216 10:47:38.499406 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691\": container with ID starting with 1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691 not found: ID does not exist" containerID="1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.499460 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691"} err="failed to get container status \"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691\": rpc error: code = NotFound desc = could not find container \"1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691\": container with ID starting with 
1515994e69bf88b5345bdd01e41146f7707d3aaa974c4b43e90eefc9983a9691 not found: ID does not exist" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.499492 4814 scope.go:117] "RemoveContainer" containerID="55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8" Feb 16 10:47:38 crc kubenswrapper[4814]: E0216 10:47:38.501201 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8\": container with ID starting with 55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8 not found: ID does not exist" containerID="55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.501269 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8"} err="failed to get container status \"55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8\": rpc error: code = NotFound desc = could not find container \"55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8\": container with ID starting with 55250848f7eaeb4232a5bb08e63e60d169e4febcfd061967ab994baa24e1b9f8 not found: ID does not exist" Feb 16 10:47:38 crc kubenswrapper[4814]: I0216 10:47:38.501346 4814 scope.go:117] "RemoveContainer" containerID="652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e" Feb 16 10:47:38 crc kubenswrapper[4814]: E0216 10:47:38.502566 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e\": container with ID starting with 652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e not found: ID does not exist" containerID="652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e" Feb 16 10:47:38 crc 
kubenswrapper[4814]: I0216 10:47:38.502596 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e"} err="failed to get container status \"652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e\": rpc error: code = NotFound desc = could not find container \"652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e\": container with ID starting with 652009de17216c6d8fcc74ce7fa93aca7254be8b5c2c96288f0aff9ca0b67e8e not found: ID does not exist" Feb 16 10:47:39 crc kubenswrapper[4814]: I0216 10:47:39.006910 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" path="/var/lib/kubelet/pods/2ab401a5-78d1-40ec-a684-c45be1175bf4/volumes" Feb 16 10:47:49 crc kubenswrapper[4814]: I0216 10:47:49.994831 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:47:49 crc kubenswrapper[4814]: E0216 10:47:49.996304 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:48:04 crc kubenswrapper[4814]: I0216 10:48:04.995460 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:48:04 crc kubenswrapper[4814]: E0216 10:48:04.997013 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" 
podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:48:18 crc kubenswrapper[4814]: I0216 10:48:18.994389 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:48:18 crc kubenswrapper[4814]: E0216 10:48:18.995709 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:48:33 crc kubenswrapper[4814]: I0216 10:48:33.995069 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a" Feb 16 10:48:33 crc kubenswrapper[4814]: E0216 10:48:33.996121 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:48:37 crc kubenswrapper[4814]: I0216 10:48:37.960020 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:48:37 crc kubenswrapper[4814]: I0216 10:48:37.961041 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 
10:48:47 crc kubenswrapper[4814]: I0216 10:48:47.994772 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:48:47 crc kubenswrapper[4814]: E0216 10:48:47.996391 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:48:59 crc kubenswrapper[4814]: I0216 10:48:59.994082 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:48:59 crc kubenswrapper[4814]: E0216 10:48:59.995372 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:49:07 crc kubenswrapper[4814]: I0216 10:49:07.960782 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:49:07 crc kubenswrapper[4814]: I0216 10:49:07.961406 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:49:13 crc kubenswrapper[4814]: I0216 10:49:13.994973 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:49:13 crc kubenswrapper[4814]: E0216 10:49:13.996491 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:49:25 crc kubenswrapper[4814]: I0216 10:49:25.994856 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:49:25 crc kubenswrapper[4814]: E0216 10:49:25.995843 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:49:36 crc kubenswrapper[4814]: I0216 10:49:36.994552 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:49:36 crc kubenswrapper[4814]: E0216 10:49:36.995652 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:49:37 crc kubenswrapper[4814]: I0216 10:49:37.960724 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 10:49:37 crc kubenswrapper[4814]: I0216 10:49:37.960833 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 10:49:37 crc kubenswrapper[4814]: I0216 10:49:37.960910 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2"
Feb 16 10:49:37 crc kubenswrapper[4814]: I0216 10:49:37.961999 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 10:49:37 crc kubenswrapper[4814]: I0216 10:49:37.962075 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" gracePeriod=600
Feb 16 10:49:38 crc kubenswrapper[4814]: E0216 10:49:38.601341 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:49:38 crc kubenswrapper[4814]: I0216 10:49:38.692325 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" exitCode=0
Feb 16 10:49:38 crc kubenswrapper[4814]: I0216 10:49:38.692380 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"}
Feb 16 10:49:38 crc kubenswrapper[4814]: I0216 10:49:38.692423 4814 scope.go:117] "RemoveContainer" containerID="78bc2ac7675229e3b5bc3598401a9ab950d0c9ef81d2f989d2d69c741aa8413c"
Feb 16 10:49:38 crc kubenswrapper[4814]: I0216 10:49:38.693224 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:49:38 crc kubenswrapper[4814]: E0216 10:49:38.693546 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:49:48 crc kubenswrapper[4814]: I0216 10:49:48.998607 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:49:49 crc kubenswrapper[4814]: E0216 10:49:48.999640 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:49:49 crc kubenswrapper[4814]: I0216 10:49:49.994523 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:49:49 crc kubenswrapper[4814]: E0216 10:49:49.995200 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.098285 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:49:59 crc kubenswrapper[4814]: E0216 10:49:59.099293 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="registry-server"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.099310 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="registry-server"
Feb 16 10:49:59 crc kubenswrapper[4814]: E0216 10:49:59.099335 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="extract-content"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.099345 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="extract-content"
Feb 16 10:49:59 crc kubenswrapper[4814]: E0216 10:49:59.099368 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="extract-utilities"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.099376 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="extract-utilities"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.099630 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab401a5-78d1-40ec-a684-c45be1175bf4" containerName="registry-server"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.101424 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.113433 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.258906 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghs4\" (UniqueName: \"kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.259415 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.259517 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.362772 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.362943 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghs4\" (UniqueName: \"kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.363042 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.363292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.363648 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.402500 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghs4\" (UniqueName: \"kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4\") pod \"certified-operators-bt8bn\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") " pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:49:59 crc kubenswrapper[4814]: I0216 10:49:59.447650 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:00 crc kubenswrapper[4814]: I0216 10:50:00.021482 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:50:00 crc kubenswrapper[4814]: I0216 10:50:00.932240 4814 generic.go:334] "Generic (PLEG): container finished" podID="1c471689-94a7-4017-ae33-146029e7832a" containerID="acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44" exitCode=0
Feb 16 10:50:00 crc kubenswrapper[4814]: I0216 10:50:00.932304 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerDied","Data":"acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44"}
Feb 16 10:50:00 crc kubenswrapper[4814]: I0216 10:50:00.932339 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerStarted","Data":"8ded736d398e478079f4c68e8b5510635f2f20e99724d84631dea1c91b8d8faa"}
Feb 16 10:50:01 crc kubenswrapper[4814]: I0216 10:50:01.994483 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:50:01 crc kubenswrapper[4814]: E0216 10:50:01.995276 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:50:02 crc kubenswrapper[4814]: I0216 10:50:02.951155 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerStarted","Data":"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"}
Feb 16 10:50:03 crc kubenswrapper[4814]: I0216 10:50:03.962718 4814 generic.go:334] "Generic (PLEG): container finished" podID="1c471689-94a7-4017-ae33-146029e7832a" containerID="bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646" exitCode=0
Feb 16 10:50:03 crc kubenswrapper[4814]: I0216 10:50:03.962822 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerDied","Data":"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"}
Feb 16 10:50:03 crc kubenswrapper[4814]: I0216 10:50:03.994641 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:50:03 crc kubenswrapper[4814]: E0216 10:50:03.994842 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:06 crc kubenswrapper[4814]: I0216 10:50:06.094670 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerStarted","Data":"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"}
Feb 16 10:50:06 crc kubenswrapper[4814]: I0216 10:50:06.119617 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bt8bn" podStartSLOduration=2.945706019 podStartE2EDuration="7.119599537s" podCreationTimestamp="2026-02-16 10:49:59 +0000 UTC" firstStartedPulling="2026-02-16 10:50:00.934435098 +0000 UTC m=+3858.627591278" lastFinishedPulling="2026-02-16 10:50:05.108328626 +0000 UTC m=+3862.801484796" observedRunningTime="2026-02-16 10:50:06.119199447 +0000 UTC m=+3863.812355627" watchObservedRunningTime="2026-02-16 10:50:06.119599537 +0000 UTC m=+3863.812755727"
Feb 16 10:50:09 crc kubenswrapper[4814]: I0216 10:50:09.447895 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:09 crc kubenswrapper[4814]: I0216 10:50:09.449685 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:09 crc kubenswrapper[4814]: I0216 10:50:09.508102 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:10 crc kubenswrapper[4814]: I0216 10:50:10.177433 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:10 crc kubenswrapper[4814]: I0216 10:50:10.222599 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:50:12 crc kubenswrapper[4814]: I0216 10:50:12.162351 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bt8bn" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="registry-server" containerID="cri-o://c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96" gracePeriod=2
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.178134 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.179105 4814 generic.go:334] "Generic (PLEG): container finished" podID="1c471689-94a7-4017-ae33-146029e7832a" containerID="c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96" exitCode=0
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.179156 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerDied","Data":"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"}
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.179215 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bt8bn" event={"ID":"1c471689-94a7-4017-ae33-146029e7832a","Type":"ContainerDied","Data":"8ded736d398e478079f4c68e8b5510635f2f20e99724d84631dea1c91b8d8faa"}
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.179237 4814 scope.go:117] "RemoveContainer" containerID="c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.198648 4814 scope.go:117] "RemoveContainer" containerID="bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.232986 4814 scope.go:117] "RemoveContainer" containerID="acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.281321 4814 scope.go:117] "RemoveContainer" containerID="c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"
Feb 16 10:50:13 crc kubenswrapper[4814]: E0216 10:50:13.282003 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96\": container with ID starting with c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96 not found: ID does not exist" containerID="c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.282078 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96"} err="failed to get container status \"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96\": rpc error: code = NotFound desc = could not find container \"c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96\": container with ID starting with c4ba7025300066d7b4faa3f9214243bc5a5535df400257c9af98b191278b0b96 not found: ID does not exist"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.282106 4814 scope.go:117] "RemoveContainer" containerID="bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"
Feb 16 10:50:13 crc kubenswrapper[4814]: E0216 10:50:13.282498 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646\": container with ID starting with bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646 not found: ID does not exist" containerID="bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.282577 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646"} err="failed to get container status \"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646\": rpc error: code = NotFound desc = could not find container \"bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646\": container with ID starting with bc9f4add2365872785a7e26241182c3e23470545526e6e9b379e0f900c720646 not found: ID does not exist"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.282619 4814 scope.go:117] "RemoveContainer" containerID="acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44"
Feb 16 10:50:13 crc kubenswrapper[4814]: E0216 10:50:13.282984 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44\": container with ID starting with acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44 not found: ID does not exist" containerID="acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.283015 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44"} err="failed to get container status \"acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44\": rpc error: code = NotFound desc = could not find container \"acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44\": container with ID starting with acb25696a34492b665a0ace3126cc95e2f457f327b01d310ead373dd35f05e44 not found: ID does not exist"
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.323115 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ghs4\" (UniqueName: \"kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4\") pod \"1c471689-94a7-4017-ae33-146029e7832a\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") "
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.323226 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content\") pod \"1c471689-94a7-4017-ae33-146029e7832a\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") "
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.323486 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities\") pod \"1c471689-94a7-4017-ae33-146029e7832a\" (UID: \"1c471689-94a7-4017-ae33-146029e7832a\") "
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.324825 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities" (OuterVolumeSpecName: "utilities") pod "1c471689-94a7-4017-ae33-146029e7832a" (UID: "1c471689-94a7-4017-ae33-146029e7832a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.330516 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4" (OuterVolumeSpecName: "kube-api-access-7ghs4") pod "1c471689-94a7-4017-ae33-146029e7832a" (UID: "1c471689-94a7-4017-ae33-146029e7832a"). InnerVolumeSpecName "kube-api-access-7ghs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.425611 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ghs4\" (UniqueName: \"kubernetes.io/projected/1c471689-94a7-4017-ae33-146029e7832a-kube-api-access-7ghs4\") on node \"crc\" DevicePath \"\""
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.425666 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.605502 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c471689-94a7-4017-ae33-146029e7832a" (UID: "1c471689-94a7-4017-ae33-146029e7832a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 10:50:13 crc kubenswrapper[4814]: I0216 10:50:13.629580 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c471689-94a7-4017-ae33-146029e7832a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 10:50:14 crc kubenswrapper[4814]: I0216 10:50:14.190777 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bt8bn"
Feb 16 10:50:14 crc kubenswrapper[4814]: I0216 10:50:14.237287 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:50:14 crc kubenswrapper[4814]: I0216 10:50:14.247845 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bt8bn"]
Feb 16 10:50:15 crc kubenswrapper[4814]: I0216 10:50:15.007316 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c471689-94a7-4017-ae33-146029e7832a" path="/var/lib/kubelet/pods/1c471689-94a7-4017-ae33-146029e7832a/volumes"
Feb 16 10:50:15 crc kubenswrapper[4814]: I0216 10:50:15.997142 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:50:15 crc kubenswrapper[4814]: E0216 10:50:15.998351 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:16 crc kubenswrapper[4814]: I0216 10:50:16.994574 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:50:16 crc kubenswrapper[4814]: E0216 10:50:16.995572 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:50:28 crc kubenswrapper[4814]: I0216 10:50:28.994014 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:50:28 crc kubenswrapper[4814]: E0216 10:50:28.995215 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:29 crc kubenswrapper[4814]: I0216 10:50:29.994179 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:50:29 crc kubenswrapper[4814]: E0216 10:50:29.994554 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:50:41 crc kubenswrapper[4814]: I0216 10:50:41.994160 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:50:41 crc kubenswrapper[4814]: E0216 10:50:41.995127 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:50:43 crc kubenswrapper[4814]: I0216 10:50:43.001635 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:50:44 crc kubenswrapper[4814]: I0216 10:50:44.498556 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"}
Feb 16 10:50:46 crc kubenswrapper[4814]: I0216 10:50:46.517626 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" exitCode=0
Feb 16 10:50:46 crc kubenswrapper[4814]: I0216 10:50:46.517738 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"}
Feb 16 10:50:46 crc kubenswrapper[4814]: I0216 10:50:46.518141 4814 scope.go:117] "RemoveContainer" containerID="5f69e7feb63dee90fe17ac6059fb51e85d0ae5a67d4acae7da3962fb14236e3a"
Feb 16 10:50:46 crc kubenswrapper[4814]: I0216 10:50:46.519237 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"
Feb 16 10:50:46 crc kubenswrapper[4814]: E0216 10:50:46.519631 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:47 crc kubenswrapper[4814]: I0216 10:50:47.677163 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:50:47 crc kubenswrapper[4814]: I0216 10:50:47.678700 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:50:47 crc kubenswrapper[4814]: I0216 10:50:47.680212 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"
Feb 16 10:50:47 crc kubenswrapper[4814]: E0216 10:50:47.680705 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:48 crc kubenswrapper[4814]: I0216 10:50:48.677310 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 10:50:48 crc kubenswrapper[4814]: I0216 10:50:48.678110 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"
Feb 16 10:50:48 crc kubenswrapper[4814]: E0216 10:50:48.678353 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:50:53 crc kubenswrapper[4814]: I0216 10:50:53.993291 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:50:53 crc kubenswrapper[4814]: E0216 10:50:53.994296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:51:00 crc kubenswrapper[4814]: I0216 10:51:00.993267 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"
Feb 16 10:51:00 crc kubenswrapper[4814]: E0216 10:51:00.994306 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:51:08 crc kubenswrapper[4814]: I0216 10:51:08.994592 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27"
Feb 16 10:51:08 crc kubenswrapper[4814]: E0216 10:51:08.995712 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 10:51:11 crc kubenswrapper[4814]: I0216 10:51:11.994756 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e"
Feb 16 10:51:11 crc kubenswrapper[4814]: E0216 10:51:11.996177 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.057123 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"]
Feb 16 10:51:21 crc kubenswrapper[4814]: E0216 10:51:21.058519 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="extract-content"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.058568 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="extract-content"
Feb 16 10:51:21 crc kubenswrapper[4814]: E0216 10:51:21.058608 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="extract-utilities"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.058620 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="extract-utilities"
Feb 16 10:51:21 crc kubenswrapper[4814]: E0216 10:51:21.058659 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="registry-server"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.058671 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="registry-server"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.058988 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c471689-94a7-4017-ae33-146029e7832a" containerName="registry-server"
Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.061332 4814 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.074241 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"] Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.152328 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.152586 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.152679 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9ct\" (UniqueName: \"kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.257079 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.257176 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wt9ct\" (UniqueName: \"kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.257314 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.258173 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.258614 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.298571 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9ct\" (UniqueName: \"kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct\") pod \"redhat-operators-mpvbq\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.396480 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.925774 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"] Feb 16 10:51:21 crc kubenswrapper[4814]: I0216 10:51:21.994026 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:51:21 crc kubenswrapper[4814]: E0216 10:51:21.994263 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:51:22 crc kubenswrapper[4814]: I0216 10:51:22.907509 4814 generic.go:334] "Generic (PLEG): container finished" podID="de382392-8ad9-43a6-8651-e14708de4c73" containerID="7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada" exitCode=0 Feb 16 10:51:22 crc kubenswrapper[4814]: I0216 10:51:22.907569 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerDied","Data":"7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada"} Feb 16 10:51:22 crc kubenswrapper[4814]: I0216 10:51:22.907673 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerStarted","Data":"96c0b23f8d432dc1aefcfaa02ce28105e49a41599de10aa6e8ef2630ac5e44b6"} Feb 16 10:51:23 crc kubenswrapper[4814]: I0216 10:51:23.917795 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" 
event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerStarted","Data":"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c"} Feb 16 10:51:23 crc kubenswrapper[4814]: I0216 10:51:23.993292 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:51:23 crc kubenswrapper[4814]: E0216 10:51:23.993609 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:51:25 crc kubenswrapper[4814]: I0216 10:51:25.943736 4814 generic.go:334] "Generic (PLEG): container finished" podID="de382392-8ad9-43a6-8651-e14708de4c73" containerID="3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c" exitCode=0 Feb 16 10:51:25 crc kubenswrapper[4814]: I0216 10:51:25.943789 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerDied","Data":"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c"} Feb 16 10:51:26 crc kubenswrapper[4814]: I0216 10:51:26.959148 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerStarted","Data":"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795"} Feb 16 10:51:26 crc kubenswrapper[4814]: I0216 10:51:26.981714 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mpvbq" podStartSLOduration=2.464405575 podStartE2EDuration="5.981694066s" podCreationTimestamp="2026-02-16 10:51:21 +0000 UTC" firstStartedPulling="2026-02-16 
10:51:22.910959614 +0000 UTC m=+3940.604115804" lastFinishedPulling="2026-02-16 10:51:26.428248075 +0000 UTC m=+3944.121404295" observedRunningTime="2026-02-16 10:51:26.977394859 +0000 UTC m=+3944.670551039" watchObservedRunningTime="2026-02-16 10:51:26.981694066 +0000 UTC m=+3944.674850246" Feb 16 10:51:31 crc kubenswrapper[4814]: I0216 10:51:31.398110 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:31 crc kubenswrapper[4814]: I0216 10:51:31.398868 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:32 crc kubenswrapper[4814]: I0216 10:51:32.442520 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mpvbq" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="registry-server" probeResult="failure" output=< Feb 16 10:51:32 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 10:51:32 crc kubenswrapper[4814]: > Feb 16 10:51:35 crc kubenswrapper[4814]: I0216 10:51:35.993670 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:51:35 crc kubenswrapper[4814]: E0216 10:51:35.994713 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:51:38 crc kubenswrapper[4814]: I0216 10:51:38.994656 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:51:38 crc kubenswrapper[4814]: E0216 10:51:38.997002 
4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:51:41 crc kubenswrapper[4814]: I0216 10:51:41.452771 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:41 crc kubenswrapper[4814]: I0216 10:51:41.507611 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:41 crc kubenswrapper[4814]: I0216 10:51:41.697083 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"] Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.125807 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mpvbq" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="registry-server" containerID="cri-o://238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795" gracePeriod=2 Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.635724 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.732723 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt9ct\" (UniqueName: \"kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct\") pod \"de382392-8ad9-43a6-8651-e14708de4c73\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.732867 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content\") pod \"de382392-8ad9-43a6-8651-e14708de4c73\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.732922 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities\") pod \"de382392-8ad9-43a6-8651-e14708de4c73\" (UID: \"de382392-8ad9-43a6-8651-e14708de4c73\") " Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.733897 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities" (OuterVolumeSpecName: "utilities") pod "de382392-8ad9-43a6-8651-e14708de4c73" (UID: "de382392-8ad9-43a6-8651-e14708de4c73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.740260 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct" (OuterVolumeSpecName: "kube-api-access-wt9ct") pod "de382392-8ad9-43a6-8651-e14708de4c73" (UID: "de382392-8ad9-43a6-8651-e14708de4c73"). InnerVolumeSpecName "kube-api-access-wt9ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.836549 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt9ct\" (UniqueName: \"kubernetes.io/projected/de382392-8ad9-43a6-8651-e14708de4c73-kube-api-access-wt9ct\") on node \"crc\" DevicePath \"\"" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.836596 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.869205 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de382392-8ad9-43a6-8651-e14708de4c73" (UID: "de382392-8ad9-43a6-8651-e14708de4c73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:51:43 crc kubenswrapper[4814]: I0216 10:51:43.938169 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de382392-8ad9-43a6-8651-e14708de4c73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.136642 4814 generic.go:334] "Generic (PLEG): container finished" podID="de382392-8ad9-43a6-8651-e14708de4c73" containerID="238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795" exitCode=0 Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.136704 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerDied","Data":"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795"} Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.136735 4814 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mpvbq" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.136753 4814 scope.go:117] "RemoveContainer" containerID="238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.136738 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mpvbq" event={"ID":"de382392-8ad9-43a6-8651-e14708de4c73","Type":"ContainerDied","Data":"96c0b23f8d432dc1aefcfaa02ce28105e49a41599de10aa6e8ef2630ac5e44b6"} Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.158198 4814 scope.go:117] "RemoveContainer" containerID="3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.178815 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"] Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.190000 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mpvbq"] Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.197365 4814 scope.go:117] "RemoveContainer" containerID="7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.233100 4814 scope.go:117] "RemoveContainer" containerID="238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795" Feb 16 10:51:44 crc kubenswrapper[4814]: E0216 10:51:44.233657 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795\": container with ID starting with 238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795 not found: ID does not exist" containerID="238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.233722 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795"} err="failed to get container status \"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795\": rpc error: code = NotFound desc = could not find container \"238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795\": container with ID starting with 238ae1e4a3ac04aab2500cecdf759affc53297855c6bd9f62aab5dea41a2c795 not found: ID does not exist" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.233762 4814 scope.go:117] "RemoveContainer" containerID="3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c" Feb 16 10:51:44 crc kubenswrapper[4814]: E0216 10:51:44.234235 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c\": container with ID starting with 3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c not found: ID does not exist" containerID="3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.234284 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c"} err="failed to get container status \"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c\": rpc error: code = NotFound desc = could not find container \"3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c\": container with ID starting with 3819b4c8d7d3c82c6858caa892c8fb2d8d8d1207e45354a035f8d93bc22daf0c not found: ID does not exist" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.234319 4814 scope.go:117] "RemoveContainer" containerID="7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada" Feb 16 10:51:44 crc kubenswrapper[4814]: E0216 
10:51:44.234717 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada\": container with ID starting with 7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada not found: ID does not exist" containerID="7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada" Feb 16 10:51:44 crc kubenswrapper[4814]: I0216 10:51:44.234752 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada"} err="failed to get container status \"7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada\": rpc error: code = NotFound desc = could not find container \"7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada\": container with ID starting with 7f48869decdfec72847fb7350ef5f80c21caa3f495711610b108378254855ada not found: ID does not exist" Feb 16 10:51:45 crc kubenswrapper[4814]: I0216 10:51:45.007488 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de382392-8ad9-43a6-8651-e14708de4c73" path="/var/lib/kubelet/pods/de382392-8ad9-43a6-8651-e14708de4c73/volumes" Feb 16 10:51:47 crc kubenswrapper[4814]: I0216 10:51:47.993594 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:51:47 crc kubenswrapper[4814]: E0216 10:51:47.994607 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:51:51 crc kubenswrapper[4814]: I0216 10:51:51.993939 
4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:51:51 crc kubenswrapper[4814]: E0216 10:51:51.995096 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:52:00 crc kubenswrapper[4814]: I0216 10:52:00.994413 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:52:00 crc kubenswrapper[4814]: E0216 10:52:00.995266 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:52:04 crc kubenswrapper[4814]: I0216 10:52:04.997253 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:52:04 crc kubenswrapper[4814]: E0216 10:52:04.998193 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:52:13 crc kubenswrapper[4814]: I0216 10:52:12.999649 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:52:13 crc 
kubenswrapper[4814]: E0216 10:52:13.000448 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:52:15 crc kubenswrapper[4814]: I0216 10:52:15.423309 4814 patch_prober.go:28] interesting pod/route-controller-manager-86f746897-pjd67 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 10:52:15 crc kubenswrapper[4814]: I0216 10:52:15.423756 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" podUID="1165800c-5b80-43be-9264-383f3228dc73" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 10:52:15 crc kubenswrapper[4814]: I0216 10:52:15.424945 4814 patch_prober.go:28] interesting pod/route-controller-manager-86f746897-pjd67 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 10:52:15 crc kubenswrapper[4814]: I0216 10:52:15.424972 4814 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-86f746897-pjd67" 
podUID="1165800c-5b80-43be-9264-383f3228dc73" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 10:52:16 crc kubenswrapper[4814]: I0216 10:52:16.993501 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:52:16 crc kubenswrapper[4814]: E0216 10:52:16.994526 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:52:24 crc kubenswrapper[4814]: I0216 10:52:24.993799 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:52:24 crc kubenswrapper[4814]: E0216 10:52:24.994716 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:52:29 crc kubenswrapper[4814]: I0216 10:52:29.993981 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:52:29 crc kubenswrapper[4814]: E0216 10:52:29.994975 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:52:38 crc kubenswrapper[4814]: I0216 10:52:38.994302 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:52:38 crc kubenswrapper[4814]: E0216 10:52:38.995938 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:52:40 crc kubenswrapper[4814]: I0216 10:52:40.993721 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:52:40 crc kubenswrapper[4814]: E0216 10:52:40.994265 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:52:49 crc kubenswrapper[4814]: I0216 10:52:49.994013 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:52:49 crc kubenswrapper[4814]: E0216 10:52:49.995189 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:52:53 crc kubenswrapper[4814]: I0216 10:52:53.993784 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:52:53 crc kubenswrapper[4814]: E0216 10:52:53.994299 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:53:00 crc kubenswrapper[4814]: I0216 10:53:00.994134 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:53:00 crc kubenswrapper[4814]: E0216 10:53:00.995362 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:53:08 crc kubenswrapper[4814]: I0216 10:53:08.994445 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:53:08 crc kubenswrapper[4814]: E0216 10:53:08.995711 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:53:15 crc 
kubenswrapper[4814]: I0216 10:53:15.997861 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:53:16 crc kubenswrapper[4814]: E0216 10:53:15.999216 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:53:20 crc kubenswrapper[4814]: I0216 10:53:20.994307 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:53:20 crc kubenswrapper[4814]: E0216 10:53:20.996713 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:53:28 crc kubenswrapper[4814]: I0216 10:53:28.994307 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:53:28 crc kubenswrapper[4814]: E0216 10:53:28.995686 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:53:31 crc kubenswrapper[4814]: I0216 10:53:31.994599 4814 
scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:53:31 crc kubenswrapper[4814]: E0216 10:53:31.995277 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:53:43 crc kubenswrapper[4814]: I0216 10:53:43.036332 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:53:43 crc kubenswrapper[4814]: E0216 10:53:43.038865 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:53:44 crc kubenswrapper[4814]: I0216 10:53:44.994131 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:53:44 crc kubenswrapper[4814]: E0216 10:53:44.994481 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:53:57 crc kubenswrapper[4814]: I0216 10:53:57.993968 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:53:57 crc 
kubenswrapper[4814]: E0216 10:53:57.995085 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:53:59 crc kubenswrapper[4814]: I0216 10:53:59.994646 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:53:59 crc kubenswrapper[4814]: E0216 10:53:59.995456 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:54:11 crc kubenswrapper[4814]: I0216 10:54:11.993891 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:54:11 crc kubenswrapper[4814]: E0216 10:54:11.997080 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:54:13 crc kubenswrapper[4814]: I0216 10:54:13.004722 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:54:13 crc kubenswrapper[4814]: E0216 10:54:13.005332 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:54:23 crc kubenswrapper[4814]: I0216 10:54:23.994116 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:54:23 crc kubenswrapper[4814]: E0216 10:54:23.995117 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:54:26 crc kubenswrapper[4814]: I0216 10:54:26.995350 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:54:26 crc kubenswrapper[4814]: E0216 10:54:26.996202 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 10:54:38 crc kubenswrapper[4814]: I0216 10:54:38.993462 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:54:38 crc kubenswrapper[4814]: E0216 10:54:38.994232 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:54:40 crc kubenswrapper[4814]: I0216 10:54:40.993976 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:54:41 crc kubenswrapper[4814]: I0216 10:54:41.877670 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5"} Feb 16 10:54:49 crc kubenswrapper[4814]: I0216 10:54:49.995380 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:54:49 crc kubenswrapper[4814]: E0216 10:54:49.998178 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:01 crc kubenswrapper[4814]: I0216 10:55:01.994910 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:01 crc kubenswrapper[4814]: E0216 10:55:01.996671 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:14 crc kubenswrapper[4814]: I0216 10:55:14.993335 4814 scope.go:117] 
"RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:14 crc kubenswrapper[4814]: E0216 10:55:14.994896 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:27 crc kubenswrapper[4814]: I0216 10:55:27.994506 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:27 crc kubenswrapper[4814]: E0216 10:55:27.995240 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.103134 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:28 crc kubenswrapper[4814]: E0216 10:55:28.103558 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="extract-utilities" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.103573 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="extract-utilities" Feb 16 10:55:28 crc kubenswrapper[4814]: E0216 10:55:28.103597 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="extract-content" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.103603 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="extract-content" Feb 16 10:55:28 crc kubenswrapper[4814]: E0216 10:55:28.103622 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="registry-server" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.103628 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="registry-server" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.103816 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="de382392-8ad9-43a6-8651-e14708de4c73" containerName="registry-server" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.105125 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.170770 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.224298 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.224729 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7bcf\" (UniqueName: \"kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.224827 4814 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.326456 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7bcf\" (UniqueName: \"kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.326600 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.326751 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.327304 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.327348 4814 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.350472 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7bcf\" (UniqueName: \"kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf\") pod \"redhat-marketplace-mk79t\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.433275 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:28 crc kubenswrapper[4814]: I0216 10:55:28.935221 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:29 crc kubenswrapper[4814]: I0216 10:55:29.349215 4814 generic.go:334] "Generic (PLEG): container finished" podID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerID="e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76" exitCode=0 Feb 16 10:55:29 crc kubenswrapper[4814]: I0216 10:55:29.349449 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerDied","Data":"e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76"} Feb 16 10:55:29 crc kubenswrapper[4814]: I0216 10:55:29.349500 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerStarted","Data":"ce51893245d4c108b85f3f4ddb08df6176c1fbbc04c7b1bbf9a5099fddb31c06"} Feb 16 10:55:29 crc kubenswrapper[4814]: I0216 
10:55:29.351119 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 10:55:31 crc kubenswrapper[4814]: I0216 10:55:31.379762 4814 generic.go:334] "Generic (PLEG): container finished" podID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerID="5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d" exitCode=0 Feb 16 10:55:31 crc kubenswrapper[4814]: I0216 10:55:31.379867 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerDied","Data":"5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d"} Feb 16 10:55:32 crc kubenswrapper[4814]: I0216 10:55:32.389938 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerStarted","Data":"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7"} Feb 16 10:55:32 crc kubenswrapper[4814]: I0216 10:55:32.419340 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mk79t" podStartSLOduration=1.7475765810000001 podStartE2EDuration="4.419324503s" podCreationTimestamp="2026-02-16 10:55:28 +0000 UTC" firstStartedPulling="2026-02-16 10:55:29.350775316 +0000 UTC m=+4187.043931496" lastFinishedPulling="2026-02-16 10:55:32.022523238 +0000 UTC m=+4189.715679418" observedRunningTime="2026-02-16 10:55:32.417744509 +0000 UTC m=+4190.110900689" watchObservedRunningTime="2026-02-16 10:55:32.419324503 +0000 UTC m=+4190.112480683" Feb 16 10:55:38 crc kubenswrapper[4814]: I0216 10:55:38.433929 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:38 crc kubenswrapper[4814]: I0216 10:55:38.434352 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:38 crc kubenswrapper[4814]: I0216 10:55:38.507356 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:39 crc kubenswrapper[4814]: I0216 10:55:39.509071 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:39 crc kubenswrapper[4814]: I0216 10:55:39.570912 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:41 crc kubenswrapper[4814]: I0216 10:55:41.463578 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mk79t" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="registry-server" containerID="cri-o://34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7" gracePeriod=2 Feb 16 10:55:41 crc kubenswrapper[4814]: I0216 10:55:41.913271 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:41 crc kubenswrapper[4814]: I0216 10:55:41.993801 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:41 crc kubenswrapper[4814]: E0216 10:55:41.994206 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.026267 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7bcf\" (UniqueName: \"kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf\") pod \"832bca86-0293-4b8c-9b1a-d58d298a58c3\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.026369 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities\") pod \"832bca86-0293-4b8c-9b1a-d58d298a58c3\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.026398 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content\") pod \"832bca86-0293-4b8c-9b1a-d58d298a58c3\" (UID: \"832bca86-0293-4b8c-9b1a-d58d298a58c3\") " Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.031329 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities" (OuterVolumeSpecName: "utilities") pod 
"832bca86-0293-4b8c-9b1a-d58d298a58c3" (UID: "832bca86-0293-4b8c-9b1a-d58d298a58c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.050918 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf" (OuterVolumeSpecName: "kube-api-access-c7bcf") pod "832bca86-0293-4b8c-9b1a-d58d298a58c3" (UID: "832bca86-0293-4b8c-9b1a-d58d298a58c3"). InnerVolumeSpecName "kube-api-access-c7bcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.079857 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "832bca86-0293-4b8c-9b1a-d58d298a58c3" (UID: "832bca86-0293-4b8c-9b1a-d58d298a58c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.129926 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7bcf\" (UniqueName: \"kubernetes.io/projected/832bca86-0293-4b8c-9b1a-d58d298a58c3-kube-api-access-c7bcf\") on node \"crc\" DevicePath \"\"" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.129958 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.129970 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/832bca86-0293-4b8c-9b1a-d58d298a58c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.476617 4814 generic.go:334] "Generic (PLEG): container finished" podID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerID="34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7" exitCode=0 Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.476666 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerDied","Data":"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7"} Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.476723 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mk79t" event={"ID":"832bca86-0293-4b8c-9b1a-d58d298a58c3","Type":"ContainerDied","Data":"ce51893245d4c108b85f3f4ddb08df6176c1fbbc04c7b1bbf9a5099fddb31c06"} Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.476743 4814 scope.go:117] "RemoveContainer" containerID="34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 
10:55:42.476676 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mk79t" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.513915 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.515932 4814 scope.go:117] "RemoveContainer" containerID="5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.525171 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mk79t"] Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.548316 4814 scope.go:117] "RemoveContainer" containerID="e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.593895 4814 scope.go:117] "RemoveContainer" containerID="34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7" Feb 16 10:55:42 crc kubenswrapper[4814]: E0216 10:55:42.594427 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7\": container with ID starting with 34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7 not found: ID does not exist" containerID="34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.594486 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7"} err="failed to get container status \"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7\": rpc error: code = NotFound desc = could not find container \"34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7\": container with ID starting with 
34f7654affcc6670759183e1f5f95c533a7ce7f96261abe4ccddf203e4f4a6b7 not found: ID does not exist" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.594520 4814 scope.go:117] "RemoveContainer" containerID="5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d" Feb 16 10:55:42 crc kubenswrapper[4814]: E0216 10:55:42.594954 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d\": container with ID starting with 5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d not found: ID does not exist" containerID="5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.594997 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d"} err="failed to get container status \"5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d\": rpc error: code = NotFound desc = could not find container \"5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d\": container with ID starting with 5399ad1a0ce7cc2dc0adad5998ec0081ca0419a20f5f0e6a5c984ba293dec02d not found: ID does not exist" Feb 16 10:55:42 crc kubenswrapper[4814]: I0216 10:55:42.595024 4814 scope.go:117] "RemoveContainer" containerID="e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76" Feb 16 10:55:42 crc kubenswrapper[4814]: E0216 10:55:42.598195 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76\": container with ID starting with e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76 not found: ID does not exist" containerID="e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76" Feb 16 10:55:42 crc 
kubenswrapper[4814]: I0216 10:55:42.598270 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76"} err="failed to get container status \"e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76\": rpc error: code = NotFound desc = could not find container \"e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76\": container with ID starting with e224617f55aaaf2480b4ac20e4572b489ffd76f099a48296d2ea18143e6ffa76 not found: ID does not exist" Feb 16 10:55:43 crc kubenswrapper[4814]: I0216 10:55:43.056050 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" path="/var/lib/kubelet/pods/832bca86-0293-4b8c-9b1a-d58d298a58c3/volumes" Feb 16 10:55:53 crc kubenswrapper[4814]: I0216 10:55:53.001776 4814 scope.go:117] "RemoveContainer" containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:54 crc kubenswrapper[4814]: I0216 10:55:54.612015 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf"} Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.647556 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" exitCode=0 Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.647631 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf"} Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.648619 4814 scope.go:117] "RemoveContainer" 
containerID="dae55f5dd836e74fd13ac920fc82d959c900639a0137af9d4591ef6e1028ee7e" Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.649662 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:55:57 crc kubenswrapper[4814]: E0216 10:55:57.650281 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.677144 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.677237 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:55:57 crc kubenswrapper[4814]: I0216 10:55:57.677259 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 10:55:58 crc kubenswrapper[4814]: I0216 10:55:58.659727 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:55:58 crc kubenswrapper[4814]: E0216 10:55:58.660296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:56:10 crc kubenswrapper[4814]: I0216 10:56:10.993614 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:56:10 crc 
kubenswrapper[4814]: E0216 10:56:10.994449 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:56:23 crc kubenswrapper[4814]: I0216 10:56:23.993833 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:56:23 crc kubenswrapper[4814]: E0216 10:56:23.994487 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:56:35 crc kubenswrapper[4814]: I0216 10:56:35.993953 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:56:35 crc kubenswrapper[4814]: E0216 10:56:35.994752 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:56:47 crc kubenswrapper[4814]: I0216 10:56:47.994067 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:56:47 crc kubenswrapper[4814]: E0216 10:56:47.995199 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:57:03 crc kubenswrapper[4814]: I0216 10:57:03.004589 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:57:03 crc kubenswrapper[4814]: E0216 10:57:03.007932 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:57:07 crc kubenswrapper[4814]: I0216 10:57:07.960671 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:57:07 crc kubenswrapper[4814]: I0216 10:57:07.961680 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:57:15 crc kubenswrapper[4814]: I0216 10:57:15.993968 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:57:15 crc kubenswrapper[4814]: E0216 10:57:15.995061 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:57:29 crc kubenswrapper[4814]: I0216 10:57:29.995017 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:57:29 crc kubenswrapper[4814]: E0216 10:57:29.996250 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:57:37 crc kubenswrapper[4814]: I0216 10:57:37.960550 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:57:37 crc kubenswrapper[4814]: I0216 10:57:37.961127 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.415171 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:40 crc kubenswrapper[4814]: E0216 10:57:40.418168 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="extract-content" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.418196 4814 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="extract-content" Feb 16 10:57:40 crc kubenswrapper[4814]: E0216 10:57:40.418237 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="registry-server" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.418243 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="registry-server" Feb 16 10:57:40 crc kubenswrapper[4814]: E0216 10:57:40.418265 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="extract-utilities" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.418273 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="extract-utilities" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.418495 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="832bca86-0293-4b8c-9b1a-d58d298a58c3" containerName="registry-server" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.420101 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.429968 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.487505 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.488066 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.488524 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7t5g\" (UniqueName: \"kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.591331 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.591436 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.591596 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7t5g\" (UniqueName: \"kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.591923 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.591986 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.611693 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7t5g\" (UniqueName: \"kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g\") pod \"community-operators-bsfr5\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:40 crc kubenswrapper[4814]: I0216 10:57:40.757782 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:41 crc kubenswrapper[4814]: I0216 10:57:41.290469 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:41 crc kubenswrapper[4814]: I0216 10:57:41.815963 4814 generic.go:334] "Generic (PLEG): container finished" podID="e453edba-6008-487c-aa6e-b430062a000f" containerID="1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed" exitCode=0 Feb 16 10:57:41 crc kubenswrapper[4814]: I0216 10:57:41.816036 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerDied","Data":"1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed"} Feb 16 10:57:41 crc kubenswrapper[4814]: I0216 10:57:41.816302 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerStarted","Data":"7a4367e2e12909cd51e99200b34fa6bffd635759bc59ce31077ca172098f9506"} Feb 16 10:57:43 crc kubenswrapper[4814]: I0216 10:57:43.000719 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:57:43 crc kubenswrapper[4814]: E0216 10:57:43.002059 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:57:44 crc kubenswrapper[4814]: I0216 10:57:44.855825 4814 generic.go:334] "Generic (PLEG): container finished" podID="e453edba-6008-487c-aa6e-b430062a000f" containerID="4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e" 
exitCode=0 Feb 16 10:57:44 crc kubenswrapper[4814]: I0216 10:57:44.856380 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerDied","Data":"4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e"} Feb 16 10:57:45 crc kubenswrapper[4814]: I0216 10:57:45.869283 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerStarted","Data":"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344"} Feb 16 10:57:45 crc kubenswrapper[4814]: I0216 10:57:45.900469 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bsfr5" podStartSLOduration=2.197808315 podStartE2EDuration="5.900433114s" podCreationTimestamp="2026-02-16 10:57:40 +0000 UTC" firstStartedPulling="2026-02-16 10:57:41.81861165 +0000 UTC m=+4319.511767840" lastFinishedPulling="2026-02-16 10:57:45.521236429 +0000 UTC m=+4323.214392639" observedRunningTime="2026-02-16 10:57:45.891793159 +0000 UTC m=+4323.584949339" watchObservedRunningTime="2026-02-16 10:57:45.900433114 +0000 UTC m=+4323.593589334" Feb 16 10:57:50 crc kubenswrapper[4814]: I0216 10:57:50.758507 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:50 crc kubenswrapper[4814]: I0216 10:57:50.759168 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:50 crc kubenswrapper[4814]: I0216 10:57:50.925069 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:51 crc kubenswrapper[4814]: I0216 10:57:51.013131 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:51 crc kubenswrapper[4814]: I0216 10:57:51.179782 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:52 crc kubenswrapper[4814]: I0216 10:57:52.946788 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bsfr5" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="registry-server" containerID="cri-o://15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344" gracePeriod=2 Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.475349 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.578517 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7t5g\" (UniqueName: \"kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g\") pod \"e453edba-6008-487c-aa6e-b430062a000f\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.578892 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content\") pod \"e453edba-6008-487c-aa6e-b430062a000f\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.579698 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities\") pod \"e453edba-6008-487c-aa6e-b430062a000f\" (UID: \"e453edba-6008-487c-aa6e-b430062a000f\") " Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.580624 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities" (OuterVolumeSpecName: "utilities") pod "e453edba-6008-487c-aa6e-b430062a000f" (UID: "e453edba-6008-487c-aa6e-b430062a000f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.580917 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.583820 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g" (OuterVolumeSpecName: "kube-api-access-l7t5g") pod "e453edba-6008-487c-aa6e-b430062a000f" (UID: "e453edba-6008-487c-aa6e-b430062a000f"). InnerVolumeSpecName "kube-api-access-l7t5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.635380 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e453edba-6008-487c-aa6e-b430062a000f" (UID: "e453edba-6008-487c-aa6e-b430062a000f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.682560 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e453edba-6008-487c-aa6e-b430062a000f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.682593 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7t5g\" (UniqueName: \"kubernetes.io/projected/e453edba-6008-487c-aa6e-b430062a000f-kube-api-access-l7t5g\") on node \"crc\" DevicePath \"\"" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.956684 4814 generic.go:334] "Generic (PLEG): container finished" podID="e453edba-6008-487c-aa6e-b430062a000f" containerID="15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344" exitCode=0 Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.956806 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bsfr5" Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.956811 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerDied","Data":"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344"} Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.957099 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bsfr5" event={"ID":"e453edba-6008-487c-aa6e-b430062a000f","Type":"ContainerDied","Data":"7a4367e2e12909cd51e99200b34fa6bffd635759bc59ce31077ca172098f9506"} Feb 16 10:57:53 crc kubenswrapper[4814]: I0216 10:57:53.957133 4814 scope.go:117] "RemoveContainer" containerID="15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.003654 4814 scope.go:117] "RemoveContainer" 
containerID="4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.005238 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.015664 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bsfr5"] Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.039395 4814 scope.go:117] "RemoveContainer" containerID="1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.071494 4814 scope.go:117] "RemoveContainer" containerID="15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344" Feb 16 10:57:54 crc kubenswrapper[4814]: E0216 10:57:54.072374 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344\": container with ID starting with 15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344 not found: ID does not exist" containerID="15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.072441 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344"} err="failed to get container status \"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344\": rpc error: code = NotFound desc = could not find container \"15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344\": container with ID starting with 15e89009711e0dc8c95a867d06fe54fb98084af795c1aed7adf5a0b00253e344 not found: ID does not exist" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.072467 4814 scope.go:117] "RemoveContainer" 
containerID="4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e" Feb 16 10:57:54 crc kubenswrapper[4814]: E0216 10:57:54.073015 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e\": container with ID starting with 4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e not found: ID does not exist" containerID="4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.073051 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e"} err="failed to get container status \"4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e\": rpc error: code = NotFound desc = could not find container \"4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e\": container with ID starting with 4f03cd64f3366390356bcbb791ce0724b3b39dc7612acdffb1bed8f4b58d113e not found: ID does not exist" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.073079 4814 scope.go:117] "RemoveContainer" containerID="1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed" Feb 16 10:57:54 crc kubenswrapper[4814]: E0216 10:57:54.073675 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed\": container with ID starting with 1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed not found: ID does not exist" containerID="1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed" Feb 16 10:57:54 crc kubenswrapper[4814]: I0216 10:57:54.073735 4814 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed"} err="failed to get container status \"1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed\": rpc error: code = NotFound desc = could not find container \"1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed\": container with ID starting with 1fcba4cdf8425d0c2a186074281c5284401ad947d7cbc535e2115b845d0391ed not found: ID does not exist" Feb 16 10:57:55 crc kubenswrapper[4814]: I0216 10:57:55.009214 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e453edba-6008-487c-aa6e-b430062a000f" path="/var/lib/kubelet/pods/e453edba-6008-487c-aa6e-b430062a000f/volumes" Feb 16 10:57:55 crc kubenswrapper[4814]: I0216 10:57:55.994998 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:57:55 crc kubenswrapper[4814]: E0216 10:57:55.996141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:58:06 crc kubenswrapper[4814]: I0216 10:58:06.995344 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:58:06 crc kubenswrapper[4814]: E0216 10:58:06.996485 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:58:07 crc kubenswrapper[4814]: I0216 10:58:07.960832 4814 patch_prober.go:28] 
interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 10:58:07 crc kubenswrapper[4814]: I0216 10:58:07.960922 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 10:58:07 crc kubenswrapper[4814]: I0216 10:58:07.961000 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 10:58:07 crc kubenswrapper[4814]: I0216 10:58:07.962420 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 10:58:07 crc kubenswrapper[4814]: I0216 10:58:07.962586 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5" gracePeriod=600 Feb 16 10:58:08 crc kubenswrapper[4814]: I0216 10:58:08.137247 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5" exitCode=0 Feb 16 10:58:08 crc kubenswrapper[4814]: I0216 
10:58:08.137469 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5"} Feb 16 10:58:08 crc kubenswrapper[4814]: I0216 10:58:08.137515 4814 scope.go:117] "RemoveContainer" containerID="78b80f88e1346790db8ec148a92000bf2fed9627d1368d9abde2eca73a94fe27" Feb 16 10:58:09 crc kubenswrapper[4814]: I0216 10:58:09.149965 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"} Feb 16 10:58:21 crc kubenswrapper[4814]: I0216 10:58:21.993822 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:58:21 crc kubenswrapper[4814]: E0216 10:58:21.994636 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:58:36 crc kubenswrapper[4814]: I0216 10:58:36.996065 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:58:36 crc kubenswrapper[4814]: E0216 10:58:36.997156 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" 
Feb 16 10:58:47 crc kubenswrapper[4814]: I0216 10:58:47.994850 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:58:47 crc kubenswrapper[4814]: E0216 10:58:47.995998 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:59:03 crc kubenswrapper[4814]: I0216 10:59:03.004734 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:59:03 crc kubenswrapper[4814]: E0216 10:59:03.005778 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:59:13 crc kubenswrapper[4814]: I0216 10:59:13.994257 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:59:13 crc kubenswrapper[4814]: E0216 10:59:13.995746 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:59:25 crc kubenswrapper[4814]: I0216 10:59:25.994415 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:59:25 crc 
kubenswrapper[4814]: E0216 10:59:25.995493 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:59:37 crc kubenswrapper[4814]: I0216 10:59:37.994681 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:59:37 crc kubenswrapper[4814]: E0216 10:59:37.995969 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 10:59:53 crc kubenswrapper[4814]: I0216 10:59:53.000933 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 10:59:53 crc kubenswrapper[4814]: E0216 10:59:53.001961 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.179451 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59"] Feb 16 11:00:00 crc kubenswrapper[4814]: E0216 11:00:00.180438 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="extract-content" Feb 16 
11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.180458 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="extract-content" Feb 16 11:00:00 crc kubenswrapper[4814]: E0216 11:00:00.180475 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="extract-utilities" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.180484 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="extract-utilities" Feb 16 11:00:00 crc kubenswrapper[4814]: E0216 11:00:00.180554 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="registry-server" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.180564 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="registry-server" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.180814 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="e453edba-6008-487c-aa6e-b430062a000f" containerName="registry-server" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.181616 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.183954 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.190016 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59"] Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.196246 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.312059 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpz7\" (UniqueName: \"kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.312154 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.312351 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.414088 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gpz7\" (UniqueName: \"kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.414166 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.414302 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.415351 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.423912 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.435228 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gpz7\" (UniqueName: \"kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7\") pod \"collect-profiles-29520660-vpp59\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:00 crc kubenswrapper[4814]: I0216 11:00:00.499988 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:01 crc kubenswrapper[4814]: I0216 11:00:01.025171 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59"] Feb 16 11:00:01 crc kubenswrapper[4814]: I0216 11:00:01.330361 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" event={"ID":"4592526d-7e72-4165-8661-9315721c6eac","Type":"ContainerStarted","Data":"bec03661120afc4e96421225ce43d2659848db88923db21334d8105d739bcbb3"} Feb 16 11:00:01 crc kubenswrapper[4814]: I0216 11:00:01.330772 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" event={"ID":"4592526d-7e72-4165-8661-9315721c6eac","Type":"ContainerStarted","Data":"80585b205377a76675456e40b5475639b72dbabbdf96c0e4848b1c8e0d461d4b"} Feb 16 11:00:01 crc kubenswrapper[4814]: I0216 11:00:01.354375 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" 
podStartSLOduration=1.354348243 podStartE2EDuration="1.354348243s" podCreationTimestamp="2026-02-16 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:00:01.349979234 +0000 UTC m=+4459.043135424" watchObservedRunningTime="2026-02-16 11:00:01.354348243 +0000 UTC m=+4459.047504443" Feb 16 11:00:02 crc kubenswrapper[4814]: I0216 11:00:02.351096 4814 generic.go:334] "Generic (PLEG): container finished" podID="4592526d-7e72-4165-8661-9315721c6eac" containerID="bec03661120afc4e96421225ce43d2659848db88923db21334d8105d739bcbb3" exitCode=0 Feb 16 11:00:02 crc kubenswrapper[4814]: I0216 11:00:02.351365 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" event={"ID":"4592526d-7e72-4165-8661-9315721c6eac","Type":"ContainerDied","Data":"bec03661120afc4e96421225ce43d2659848db88923db21334d8105d739bcbb3"} Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.734201 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.790218 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume\") pod \"4592526d-7e72-4165-8661-9315721c6eac\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.790366 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gpz7\" (UniqueName: \"kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7\") pod \"4592526d-7e72-4165-8661-9315721c6eac\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.790532 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume\") pod \"4592526d-7e72-4165-8661-9315721c6eac\" (UID: \"4592526d-7e72-4165-8661-9315721c6eac\") " Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.791750 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume" (OuterVolumeSpecName: "config-volume") pod "4592526d-7e72-4165-8661-9315721c6eac" (UID: "4592526d-7e72-4165-8661-9315721c6eac"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.795827 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7" (OuterVolumeSpecName: "kube-api-access-2gpz7") pod "4592526d-7e72-4165-8661-9315721c6eac" (UID: "4592526d-7e72-4165-8661-9315721c6eac"). 
InnerVolumeSpecName "kube-api-access-2gpz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.801779 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4592526d-7e72-4165-8661-9315721c6eac" (UID: "4592526d-7e72-4165-8661-9315721c6eac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.892841 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4592526d-7e72-4165-8661-9315721c6eac-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.892874 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4592526d-7e72-4165-8661-9315721c6eac-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:00:03 crc kubenswrapper[4814]: I0216 11:00:03.892883 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gpz7\" (UniqueName: \"kubernetes.io/projected/4592526d-7e72-4165-8661-9315721c6eac-kube-api-access-2gpz7\") on node \"crc\" DevicePath \"\"" Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.374294 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" event={"ID":"4592526d-7e72-4165-8661-9315721c6eac","Type":"ContainerDied","Data":"80585b205377a76675456e40b5475639b72dbabbdf96c0e4848b1c8e0d461d4b"} Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.374773 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80585b205377a76675456e40b5475639b72dbabbdf96c0e4848b1c8e0d461d4b" Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.374338 4814 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-vpp59" Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.459117 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm"] Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.467811 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520615-8rjlm"] Feb 16 11:00:04 crc kubenswrapper[4814]: I0216 11:00:04.994441 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:00:04 crc kubenswrapper[4814]: E0216 11:00:04.995143 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:00:05 crc kubenswrapper[4814]: I0216 11:00:05.014887 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93acc19d-fd99-485c-98ca-21f065258a67" path="/var/lib/kubelet/pods/93acc19d-fd99-485c-98ca-21f065258a67/volumes" Feb 16 11:00:18 crc kubenswrapper[4814]: I0216 11:00:18.994920 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:00:18 crc kubenswrapper[4814]: E0216 11:00:18.996662 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:00:31 crc 
kubenswrapper[4814]: I0216 11:00:31.994505 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:00:31 crc kubenswrapper[4814]: E0216 11:00:31.995658 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:00:37 crc kubenswrapper[4814]: I0216 11:00:37.960600 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:00:37 crc kubenswrapper[4814]: I0216 11:00:37.961113 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:00:44 crc kubenswrapper[4814]: I0216 11:00:44.994473 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:00:44 crc kubenswrapper[4814]: E0216 11:00:44.995324 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:00:53 crc kubenswrapper[4814]: I0216 11:00:53.596431 4814 
scope.go:117] "RemoveContainer" containerID="44f603042edf0ddd7fea68a572297b2beaa64dec53d468a20f1a4c861aadcf32" Feb 16 11:00:58 crc kubenswrapper[4814]: I0216 11:00:58.995478 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:00:59 crc kubenswrapper[4814]: I0216 11:00:59.944409 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c"} Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.157137 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520661-zq9t9"] Feb 16 11:01:00 crc kubenswrapper[4814]: E0216 11:01:00.157765 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4592526d-7e72-4165-8661-9315721c6eac" containerName="collect-profiles" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.157778 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="4592526d-7e72-4165-8661-9315721c6eac" containerName="collect-profiles" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.158033 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="4592526d-7e72-4165-8661-9315721c6eac" containerName="collect-profiles" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.158716 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.166285 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520661-zq9t9"] Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.290452 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.290573 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.290715 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rwg\" (UniqueName: \"kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.290778 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.392512 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.392871 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.392956 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.393020 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rwg\" (UniqueName: \"kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.424110 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rwg\" (UniqueName: \"kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.474170 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.484900 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.486274 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle\") pod \"keystone-cron-29520661-zq9t9\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:00 crc kubenswrapper[4814]: I0216 11:01:00.777090 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:01 crc kubenswrapper[4814]: I0216 11:01:01.288870 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520661-zq9t9"] Feb 16 11:01:02 crc kubenswrapper[4814]: I0216 11:01:02.015499 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520661-zq9t9" event={"ID":"96f45b84-c126-4555-ad31-189efbc1e60c","Type":"ContainerStarted","Data":"a74686e4cd4b21d5d02983c69e556f6e05c1b9102ae405509e6aa64c11976146"} Feb 16 11:01:02 crc kubenswrapper[4814]: I0216 11:01:02.015587 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520661-zq9t9" event={"ID":"96f45b84-c126-4555-ad31-189efbc1e60c","Type":"ContainerStarted","Data":"547165e04f052f30645c3470cba42724c2a806b5f7cdcdf862c120879449baec"} Feb 16 11:01:02 crc kubenswrapper[4814]: I0216 11:01:02.049754 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520661-zq9t9" podStartSLOduration=2.049734936 podStartE2EDuration="2.049734936s" podCreationTimestamp="2026-02-16 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:01:02.043480666 +0000 UTC m=+4519.736636866" watchObservedRunningTime="2026-02-16 11:01:02.049734936 +0000 UTC m=+4519.742891116" Feb 16 11:01:02 crc kubenswrapper[4814]: I0216 11:01:02.677236 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:01:03 crc kubenswrapper[4814]: I0216 11:01:03.029629 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" exitCode=0 Feb 16 11:01:03 crc kubenswrapper[4814]: I0216 11:01:03.029860 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c"} Feb 16 11:01:03 crc kubenswrapper[4814]: I0216 11:01:03.030082 4814 scope.go:117] "RemoveContainer" containerID="d2e6c9bc26babd71b00274d0a5bb43145559d1dcf219437eee83fd87aaab27cf" Feb 16 11:01:03 crc kubenswrapper[4814]: I0216 11:01:03.033835 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:03 crc kubenswrapper[4814]: E0216 11:01:03.035624 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:03 crc kubenswrapper[4814]: I0216 11:01:03.677641 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:01:04 crc kubenswrapper[4814]: I0216 11:01:04.046509 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:04 crc kubenswrapper[4814]: E0216 11:01:04.047247 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:05 crc kubenswrapper[4814]: I0216 11:01:05.064093 4814 generic.go:334] "Generic (PLEG): container finished" podID="96f45b84-c126-4555-ad31-189efbc1e60c" containerID="a74686e4cd4b21d5d02983c69e556f6e05c1b9102ae405509e6aa64c11976146" exitCode=0 Feb 16 
11:01:05 crc kubenswrapper[4814]: I0216 11:01:05.064169 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520661-zq9t9" event={"ID":"96f45b84-c126-4555-ad31-189efbc1e60c","Type":"ContainerDied","Data":"a74686e4cd4b21d5d02983c69e556f6e05c1b9102ae405509e6aa64c11976146"} Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.444622 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.638631 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle\") pod \"96f45b84-c126-4555-ad31-189efbc1e60c\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.638694 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys\") pod \"96f45b84-c126-4555-ad31-189efbc1e60c\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.638826 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data\") pod \"96f45b84-c126-4555-ad31-189efbc1e60c\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.638978 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7rwg\" (UniqueName: \"kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg\") pod \"96f45b84-c126-4555-ad31-189efbc1e60c\" (UID: \"96f45b84-c126-4555-ad31-189efbc1e60c\") " Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.646204 4814 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96f45b84-c126-4555-ad31-189efbc1e60c" (UID: "96f45b84-c126-4555-ad31-189efbc1e60c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.646667 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg" (OuterVolumeSpecName: "kube-api-access-n7rwg") pod "96f45b84-c126-4555-ad31-189efbc1e60c" (UID: "96f45b84-c126-4555-ad31-189efbc1e60c"). InnerVolumeSpecName "kube-api-access-n7rwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.670147 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f45b84-c126-4555-ad31-189efbc1e60c" (UID: "96f45b84-c126-4555-ad31-189efbc1e60c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.694335 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data" (OuterVolumeSpecName: "config-data") pod "96f45b84-c126-4555-ad31-189efbc1e60c" (UID: "96f45b84-c126-4555-ad31-189efbc1e60c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.741042 4814 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.741077 4814 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.741088 4814 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f45b84-c126-4555-ad31-189efbc1e60c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:06 crc kubenswrapper[4814]: I0216 11:01:06.741097 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7rwg\" (UniqueName: \"kubernetes.io/projected/96f45b84-c126-4555-ad31-189efbc1e60c-kube-api-access-n7rwg\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.088288 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520661-zq9t9" event={"ID":"96f45b84-c126-4555-ad31-189efbc1e60c","Type":"ContainerDied","Data":"547165e04f052f30645c3470cba42724c2a806b5f7cdcdf862c120879449baec"} Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.088327 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547165e04f052f30645c3470cba42724c2a806b5f7cdcdf862c120879449baec" Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.088312 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520661-zq9t9" Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.676757 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.677700 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:07 crc kubenswrapper[4814]: E0216 11:01:07.677950 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.960100 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:01:07 crc kubenswrapper[4814]: I0216 11:01:07.960187 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.723809 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:20 crc kubenswrapper[4814]: E0216 11:01:20.725907 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f45b84-c126-4555-ad31-189efbc1e60c" containerName="keystone-cron" Feb 16 11:01:20 crc 
kubenswrapper[4814]: I0216 11:01:20.725934 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f45b84-c126-4555-ad31-189efbc1e60c" containerName="keystone-cron" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.727001 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f45b84-c126-4555-ad31-189efbc1e60c" containerName="keystone-cron" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.741963 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.789357 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.816577 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.816691 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.816745 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdlv\" (UniqueName: \"kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 
11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.919350 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.919502 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.919565 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdlv\" (UniqueName: \"kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.920197 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 11:01:20.920600 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:20 crc kubenswrapper[4814]: I0216 
11:01:20.942878 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdlv\" (UniqueName: \"kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv\") pod \"certified-operators-cdzq5\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:21 crc kubenswrapper[4814]: I0216 11:01:21.091248 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:21 crc kubenswrapper[4814]: I0216 11:01:21.586803 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:21 crc kubenswrapper[4814]: I0216 11:01:21.993519 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:21 crc kubenswrapper[4814]: E0216 11:01:21.994245 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:22 crc kubenswrapper[4814]: I0216 11:01:22.279030 4814 generic.go:334] "Generic (PLEG): container finished" podID="1aed429b-c00d-4fec-9b08-09316afc908b" containerID="5372607a620024074210093437dcf69012cdc56abbeae63a58c6a9786982a8ce" exitCode=0 Feb 16 11:01:22 crc kubenswrapper[4814]: I0216 11:01:22.279132 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerDied","Data":"5372607a620024074210093437dcf69012cdc56abbeae63a58c6a9786982a8ce"} Feb 16 11:01:22 crc kubenswrapper[4814]: I0216 11:01:22.279434 4814 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerStarted","Data":"88b34b332b6fabdb8571492b1cf51b71fc93bd233ea5bca3cb5a09c0667dcbc4"} Feb 16 11:01:22 crc kubenswrapper[4814]: I0216 11:01:22.283399 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.706271 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.709153 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.716413 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.773887 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twqhk\" (UniqueName: \"kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.773963 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.774066 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.875410 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.875588 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twqhk\" (UniqueName: \"kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.875655 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.875996 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.876014 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:23 crc kubenswrapper[4814]: I0216 11:01:23.899617 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twqhk\" (UniqueName: \"kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk\") pod \"redhat-operators-skztl\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:24 crc kubenswrapper[4814]: I0216 11:01:24.042563 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:24 crc kubenswrapper[4814]: I0216 11:01:24.301554 4814 generic.go:334] "Generic (PLEG): container finished" podID="1aed429b-c00d-4fec-9b08-09316afc908b" containerID="ca1bdea9bf262de34a158d302a49042e14c81556884fd3ff80f45d8de83fbf47" exitCode=0 Feb 16 11:01:24 crc kubenswrapper[4814]: I0216 11:01:24.301624 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerDied","Data":"ca1bdea9bf262de34a158d302a49042e14c81556884fd3ff80f45d8de83fbf47"} Feb 16 11:01:24 crc kubenswrapper[4814]: I0216 11:01:24.494670 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:24 crc kubenswrapper[4814]: W0216 11:01:24.679440 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd36fcd3_eb61_49ac_860e_252ea832f1c3.slice/crio-68fda11628dd12e0fe14de4d3e5a414988985df1b335caffdefa4d1cbe553144 WatchSource:0}: Error finding container 68fda11628dd12e0fe14de4d3e5a414988985df1b335caffdefa4d1cbe553144: Status 404 returned error can't find the 
container with id 68fda11628dd12e0fe14de4d3e5a414988985df1b335caffdefa4d1cbe553144 Feb 16 11:01:25 crc kubenswrapper[4814]: I0216 11:01:25.311260 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerStarted","Data":"59dda2b010e87e55c898053ce276bb7a992f153f5bb8c0fa0951406b1229e495"} Feb 16 11:01:25 crc kubenswrapper[4814]: I0216 11:01:25.313229 4814 generic.go:334] "Generic (PLEG): container finished" podID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerID="8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374" exitCode=0 Feb 16 11:01:25 crc kubenswrapper[4814]: I0216 11:01:25.313259 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerDied","Data":"8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374"} Feb 16 11:01:25 crc kubenswrapper[4814]: I0216 11:01:25.313274 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerStarted","Data":"68fda11628dd12e0fe14de4d3e5a414988985df1b335caffdefa4d1cbe553144"} Feb 16 11:01:25 crc kubenswrapper[4814]: I0216 11:01:25.334398 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdzq5" podStartSLOduration=2.902010775 podStartE2EDuration="5.334377349s" podCreationTimestamp="2026-02-16 11:01:20 +0000 UTC" firstStartedPulling="2026-02-16 11:01:22.282871091 +0000 UTC m=+4539.976027311" lastFinishedPulling="2026-02-16 11:01:24.715237705 +0000 UTC m=+4542.408393885" observedRunningTime="2026-02-16 11:01:25.329352453 +0000 UTC m=+4543.022508653" watchObservedRunningTime="2026-02-16 11:01:25.334377349 +0000 UTC m=+4543.027533529" Feb 16 11:01:28 crc kubenswrapper[4814]: I0216 11:01:28.347110 
4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerStarted","Data":"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8"} Feb 16 11:01:29 crc kubenswrapper[4814]: I0216 11:01:29.362410 4814 generic.go:334] "Generic (PLEG): container finished" podID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerID="1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8" exitCode=0 Feb 16 11:01:29 crc kubenswrapper[4814]: I0216 11:01:29.362512 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerDied","Data":"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8"} Feb 16 11:01:30 crc kubenswrapper[4814]: I0216 11:01:30.381021 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerStarted","Data":"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2"} Feb 16 11:01:30 crc kubenswrapper[4814]: I0216 11:01:30.418759 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skztl" podStartSLOduration=2.961576202 podStartE2EDuration="7.418731229s" podCreationTimestamp="2026-02-16 11:01:23 +0000 UTC" firstStartedPulling="2026-02-16 11:01:25.315680111 +0000 UTC m=+4543.008836291" lastFinishedPulling="2026-02-16 11:01:29.772835098 +0000 UTC m=+4547.465991318" observedRunningTime="2026-02-16 11:01:30.411276796 +0000 UTC m=+4548.104433006" watchObservedRunningTime="2026-02-16 11:01:30.418731229 +0000 UTC m=+4548.111887419" Feb 16 11:01:31 crc kubenswrapper[4814]: I0216 11:01:31.092643 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:31 crc 
kubenswrapper[4814]: I0216 11:01:31.092732 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:31 crc kubenswrapper[4814]: I0216 11:01:31.182350 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:31 crc kubenswrapper[4814]: I0216 11:01:31.461136 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:33 crc kubenswrapper[4814]: I0216 11:01:33.115172 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:33 crc kubenswrapper[4814]: I0216 11:01:33.407023 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdzq5" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="registry-server" containerID="cri-o://59dda2b010e87e55c898053ce276bb7a992f153f5bb8c0fa0951406b1229e495" gracePeriod=2 Feb 16 11:01:33 crc kubenswrapper[4814]: I0216 11:01:33.993313 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:33 crc kubenswrapper[4814]: E0216 11:01:33.993830 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:34 crc kubenswrapper[4814]: I0216 11:01:34.043152 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:34 crc kubenswrapper[4814]: I0216 11:01:34.043188 4814 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:35 crc kubenswrapper[4814]: I0216 11:01:35.111231 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skztl" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="registry-server" probeResult="failure" output=< Feb 16 11:01:35 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 11:01:35 crc kubenswrapper[4814]: > Feb 16 11:01:35 crc kubenswrapper[4814]: I0216 11:01:35.427653 4814 generic.go:334] "Generic (PLEG): container finished" podID="1aed429b-c00d-4fec-9b08-09316afc908b" containerID="59dda2b010e87e55c898053ce276bb7a992f153f5bb8c0fa0951406b1229e495" exitCode=0 Feb 16 11:01:35 crc kubenswrapper[4814]: I0216 11:01:35.427725 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerDied","Data":"59dda2b010e87e55c898053ce276bb7a992f153f5bb8c0fa0951406b1229e495"} Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.440990 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdzq5" event={"ID":"1aed429b-c00d-4fec-9b08-09316afc908b","Type":"ContainerDied","Data":"88b34b332b6fabdb8571492b1cf51b71fc93bd233ea5bca3cb5a09c0667dcbc4"} Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.442382 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b34b332b6fabdb8571492b1cf51b71fc93bd233ea5bca3cb5a09c0667dcbc4" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.445756 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.555263 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content\") pod \"1aed429b-c00d-4fec-9b08-09316afc908b\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.555355 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpdlv\" (UniqueName: \"kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv\") pod \"1aed429b-c00d-4fec-9b08-09316afc908b\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.555428 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities\") pod \"1aed429b-c00d-4fec-9b08-09316afc908b\" (UID: \"1aed429b-c00d-4fec-9b08-09316afc908b\") " Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.556467 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities" (OuterVolumeSpecName: "utilities") pod "1aed429b-c00d-4fec-9b08-09316afc908b" (UID: "1aed429b-c00d-4fec-9b08-09316afc908b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.561354 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv" (OuterVolumeSpecName: "kube-api-access-qpdlv") pod "1aed429b-c00d-4fec-9b08-09316afc908b" (UID: "1aed429b-c00d-4fec-9b08-09316afc908b"). InnerVolumeSpecName "kube-api-access-qpdlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.625720 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1aed429b-c00d-4fec-9b08-09316afc908b" (UID: "1aed429b-c00d-4fec-9b08-09316afc908b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.658253 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.658317 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpdlv\" (UniqueName: \"kubernetes.io/projected/1aed429b-c00d-4fec-9b08-09316afc908b-kube-api-access-qpdlv\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:36 crc kubenswrapper[4814]: I0216 11:01:36.658339 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1aed429b-c00d-4fec-9b08-09316afc908b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.447831 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdzq5" Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.496634 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.506062 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdzq5"] Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.960727 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.960834 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.960926 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.962131 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:01:37 crc kubenswrapper[4814]: I0216 11:01:37.962241 4814 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" gracePeriod=600 Feb 16 11:01:39 crc kubenswrapper[4814]: I0216 11:01:39.005001 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" path="/var/lib/kubelet/pods/1aed429b-c00d-4fec-9b08-09316afc908b/volumes" Feb 16 11:01:39 crc kubenswrapper[4814]: E0216 11:01:39.386699 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:01:39 crc kubenswrapper[4814]: I0216 11:01:39.469592 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" exitCode=0 Feb 16 11:01:39 crc kubenswrapper[4814]: I0216 11:01:39.469640 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"} Feb 16 11:01:39 crc kubenswrapper[4814]: I0216 11:01:39.469689 4814 scope.go:117] "RemoveContainer" containerID="ca6af908f4ede7a51a1c4736cf4e92695cc060a6d447854792a4408a02c959c5" Feb 16 11:01:39 crc kubenswrapper[4814]: I0216 11:01:39.470394 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:01:39 crc kubenswrapper[4814]: E0216 
11:01:39.470673 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:01:44 crc kubenswrapper[4814]: I0216 11:01:44.129290 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:44 crc kubenswrapper[4814]: I0216 11:01:44.194484 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:44 crc kubenswrapper[4814]: I0216 11:01:44.370575 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:45 crc kubenswrapper[4814]: I0216 11:01:45.546073 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skztl" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="registry-server" containerID="cri-o://f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2" gracePeriod=2 Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.133807 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.185797 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twqhk\" (UniqueName: \"kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk\") pod \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.186031 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities\") pod \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.186093 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content\") pod \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\" (UID: \"dd36fcd3-eb61-49ac-860e-252ea832f1c3\") " Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.188510 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities" (OuterVolumeSpecName: "utilities") pod "dd36fcd3-eb61-49ac-860e-252ea832f1c3" (UID: "dd36fcd3-eb61-49ac-860e-252ea832f1c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.195283 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk" (OuterVolumeSpecName: "kube-api-access-twqhk") pod "dd36fcd3-eb61-49ac-860e-252ea832f1c3" (UID: "dd36fcd3-eb61-49ac-860e-252ea832f1c3"). InnerVolumeSpecName "kube-api-access-twqhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.288333 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.288369 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twqhk\" (UniqueName: \"kubernetes.io/projected/dd36fcd3-eb61-49ac-860e-252ea832f1c3-kube-api-access-twqhk\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.304222 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd36fcd3-eb61-49ac-860e-252ea832f1c3" (UID: "dd36fcd3-eb61-49ac-860e-252ea832f1c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.390450 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd36fcd3-eb61-49ac-860e-252ea832f1c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.562360 4814 generic.go:334] "Generic (PLEG): container finished" podID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerID="f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2" exitCode=0 Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.562420 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerDied","Data":"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2"} Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.562458 4814 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-skztl" event={"ID":"dd36fcd3-eb61-49ac-860e-252ea832f1c3","Type":"ContainerDied","Data":"68fda11628dd12e0fe14de4d3e5a414988985df1b335caffdefa4d1cbe553144"} Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.562488 4814 scope.go:117] "RemoveContainer" containerID="f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.563513 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skztl" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.589138 4814 scope.go:117] "RemoveContainer" containerID="1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.611283 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.620348 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skztl"] Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.622257 4814 scope.go:117] "RemoveContainer" containerID="8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.657397 4814 scope.go:117] "RemoveContainer" containerID="f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2" Feb 16 11:01:46 crc kubenswrapper[4814]: E0216 11:01:46.657931 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2\": container with ID starting with f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2 not found: ID does not exist" containerID="f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.657977 4814 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2"} err="failed to get container status \"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2\": rpc error: code = NotFound desc = could not find container \"f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2\": container with ID starting with f10a02f88c294ea5a52cadef15d1873c33f608eeadd8377180e876fd097aa8f2 not found: ID does not exist" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.658009 4814 scope.go:117] "RemoveContainer" containerID="1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8" Feb 16 11:01:46 crc kubenswrapper[4814]: E0216 11:01:46.658380 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8\": container with ID starting with 1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8 not found: ID does not exist" containerID="1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.658437 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8"} err="failed to get container status \"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8\": rpc error: code = NotFound desc = could not find container \"1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8\": container with ID starting with 1ada1ac3d78a4b8d6c3d1ba50472912653b2edb244bbbae40b6dab87dc6962f8 not found: ID does not exist" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.658461 4814 scope.go:117] "RemoveContainer" containerID="8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374" Feb 16 11:01:46 crc kubenswrapper[4814]: E0216 
11:01:46.658813 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374\": container with ID starting with 8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374 not found: ID does not exist" containerID="8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.658836 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374"} err="failed to get container status \"8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374\": rpc error: code = NotFound desc = could not find container \"8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374\": container with ID starting with 8217e5339005d79ee8f22c446a965b930c7b8ec01859cd4e0db110634435f374 not found: ID does not exist" Feb 16 11:01:46 crc kubenswrapper[4814]: I0216 11:01:46.993915 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:01:46 crc kubenswrapper[4814]: E0216 11:01:46.994404 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:01:47 crc kubenswrapper[4814]: I0216 11:01:47.011276 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" path="/var/lib/kubelet/pods/dd36fcd3-eb61-49ac-860e-252ea832f1c3/volumes" Feb 16 11:01:53 crc kubenswrapper[4814]: I0216 11:01:53.994021 4814 scope.go:117] "RemoveContainer" 
containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:01:53 crc kubenswrapper[4814]: E0216 11:01:53.994795 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:02:01 crc kubenswrapper[4814]: I0216 11:02:01.993078 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:02:01 crc kubenswrapper[4814]: E0216 11:02:01.993852 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:02:07 crc kubenswrapper[4814]: I0216 11:02:07.994404 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:02:07 crc kubenswrapper[4814]: E0216 11:02:07.995367 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:02:13 crc kubenswrapper[4814]: I0216 11:02:13.993255 4814 scope.go:117] "RemoveContainer" 
containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:02:13 crc kubenswrapper[4814]: E0216 11:02:13.993966 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:02:19 crc kubenswrapper[4814]: I0216 11:02:19.994021 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:02:19 crc kubenswrapper[4814]: E0216 11:02:19.994800 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:02:25 crc kubenswrapper[4814]: I0216 11:02:25.994343 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:02:25 crc kubenswrapper[4814]: E0216 11:02:25.995785 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:02:31 crc kubenswrapper[4814]: I0216 11:02:31.993446 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:02:31 crc kubenswrapper[4814]: E0216 
11:02:31.994152 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:02:38 crc kubenswrapper[4814]: I0216 11:02:38.993949 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:02:38 crc kubenswrapper[4814]: E0216 11:02:38.994809 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:02:43 crc kubenswrapper[4814]: I0216 11:02:43.004695 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:02:43 crc kubenswrapper[4814]: E0216 11:02:43.006048 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:02:53 crc kubenswrapper[4814]: I0216 11:02:53.993934 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:02:53 crc kubenswrapper[4814]: E0216 11:02:53.996190 4814 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:02:55 crc kubenswrapper[4814]: I0216 11:02:55.994916 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:02:55 crc kubenswrapper[4814]: E0216 11:02:55.996681 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:03:06 crc kubenswrapper[4814]: I0216 11:03:06.994616 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:03:06 crc kubenswrapper[4814]: E0216 11:03:06.995950 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:03:08 crc kubenswrapper[4814]: I0216 11:03:08.993872 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:03:08 crc kubenswrapper[4814]: E0216 11:03:08.994278 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:03:21 crc kubenswrapper[4814]: I0216 11:03:21.993462 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:03:21 crc kubenswrapper[4814]: E0216 11:03:21.994278 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:03:22 crc kubenswrapper[4814]: I0216 11:03:22.999171 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:03:23 crc kubenswrapper[4814]: E0216 11:03:22.999774 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:03:35 crc kubenswrapper[4814]: I0216 11:03:35.994776 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:03:35 crc kubenswrapper[4814]: E0216 11:03:35.995661 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:03:36 crc kubenswrapper[4814]: I0216 11:03:36.994888 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:03:36 crc kubenswrapper[4814]: E0216 11:03:36.995927 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:03:46 crc kubenswrapper[4814]: I0216 11:03:46.994840 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:03:46 crc kubenswrapper[4814]: E0216 11:03:46.995776 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:03:50 crc kubenswrapper[4814]: I0216 11:03:50.013227 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:03:50 crc kubenswrapper[4814]: E0216 11:03:50.014578 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:03:58 crc kubenswrapper[4814]: I0216 11:03:58.994375 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:03:58 crc kubenswrapper[4814]: E0216 11:03:58.995487 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:04:03 crc kubenswrapper[4814]: I0216 11:04:03.018578 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:04:03 crc kubenswrapper[4814]: E0216 11:04:03.019814 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:04:10 crc kubenswrapper[4814]: I0216 11:04:10.993859 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:04:10 crc kubenswrapper[4814]: E0216 11:04:10.994776 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:04:13 crc 
kubenswrapper[4814]: I0216 11:04:13.993524 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:04:13 crc kubenswrapper[4814]: E0216 11:04:13.994377 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:04:23 crc kubenswrapper[4814]: I0216 11:04:23.006702 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:04:23 crc kubenswrapper[4814]: E0216 11:04:23.007943 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:04:27 crc kubenswrapper[4814]: I0216 11:04:27.993698 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:04:27 crc kubenswrapper[4814]: E0216 11:04:27.994428 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:04:35 crc kubenswrapper[4814]: I0216 11:04:35.994189 4814 
scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:04:35 crc kubenswrapper[4814]: E0216 11:04:35.995080 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:04:43 crc kubenswrapper[4814]: I0216 11:04:43.003592 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:04:43 crc kubenswrapper[4814]: E0216 11:04:43.004556 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:04:47 crc kubenswrapper[4814]: I0216 11:04:47.994389 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:04:47 crc kubenswrapper[4814]: E0216 11:04:47.995570 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:04:55 crc kubenswrapper[4814]: I0216 11:04:55.994860 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:04:55 crc 
kubenswrapper[4814]: E0216 11:04:55.995969 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:05:00 crc kubenswrapper[4814]: I0216 11:05:00.993922 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:05:00 crc kubenswrapper[4814]: E0216 11:05:00.994672 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:05:06 crc kubenswrapper[4814]: I0216 11:05:06.994354 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:05:06 crc kubenswrapper[4814]: E0216 11:05:06.995497 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:05:13 crc kubenswrapper[4814]: I0216 11:05:13.010314 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:05:13 crc kubenswrapper[4814]: E0216 11:05:13.011215 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.065101 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9t2fp/must-gather-r6b42"] Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066300 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="extract-content" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066327 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="extract-content" Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066339 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="extract-utilities" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066413 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="extract-utilities" Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066430 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="extract-content" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066438 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="extract-content" Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066453 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066460 4814 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066513 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="extract-utilities" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066522 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="extract-utilities" Feb 16 11:05:20 crc kubenswrapper[4814]: E0216 11:05:20.066626 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066637 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.066968 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd36fcd3-eb61-49ac-860e-252ea832f1c3" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.067006 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aed429b-c00d-4fec-9b08-09316afc908b" containerName="registry-server" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.068957 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.071641 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9t2fp"/"openshift-service-ca.crt" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.071641 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9t2fp"/"kube-root-ca.crt" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.071874 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9t2fp"/"default-dockercfg-plfdx" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.083610 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9t2fp/must-gather-r6b42"] Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.160685 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpcw\" (UniqueName: \"kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.160732 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.263308 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfpcw\" (UniqueName: \"kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " 
pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.263355 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.263909 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.292157 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfpcw\" (UniqueName: \"kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw\") pod \"must-gather-r6b42\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.394865 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:05:20 crc kubenswrapper[4814]: I0216 11:05:20.922158 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9t2fp/must-gather-r6b42"] Feb 16 11:05:21 crc kubenswrapper[4814]: I0216 11:05:21.811219 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/must-gather-r6b42" event={"ID":"47ffdfe2-41b9-4fa3-abb9-ba4f11507038","Type":"ContainerStarted","Data":"35ec04564390e7912ea5b2b6142bf6a47773b52ac261fb84b2dcee5f88a5f508"} Feb 16 11:05:21 crc kubenswrapper[4814]: I0216 11:05:21.994202 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:05:21 crc kubenswrapper[4814]: E0216 11:05:21.994442 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:05:26 crc kubenswrapper[4814]: I0216 11:05:26.994687 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:05:26 crc kubenswrapper[4814]: E0216 11:05:26.995454 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:05:28 crc kubenswrapper[4814]: I0216 11:05:28.871812 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-9t2fp/must-gather-r6b42" event={"ID":"47ffdfe2-41b9-4fa3-abb9-ba4f11507038","Type":"ContainerStarted","Data":"df5817f734df2913d637c144026ed471b3b10c1a6ce10c0984ac40129ec21736"} Feb 16 11:05:28 crc kubenswrapper[4814]: I0216 11:05:28.873288 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/must-gather-r6b42" event={"ID":"47ffdfe2-41b9-4fa3-abb9-ba4f11507038","Type":"ContainerStarted","Data":"5595f4e8c378484cafa42f0033eb5369b218c182d492b5113843eeb88bf3f4dd"} Feb 16 11:05:28 crc kubenswrapper[4814]: I0216 11:05:28.894907 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9t2fp/must-gather-r6b42" podStartSLOduration=1.952480019 podStartE2EDuration="8.894886977s" podCreationTimestamp="2026-02-16 11:05:20 +0000 UTC" firstStartedPulling="2026-02-16 11:05:20.915352481 +0000 UTC m=+4778.608508661" lastFinishedPulling="2026-02-16 11:05:27.857759439 +0000 UTC m=+4785.550915619" observedRunningTime="2026-02-16 11:05:28.892722789 +0000 UTC m=+4786.585878969" watchObservedRunningTime="2026-02-16 11:05:28.894886977 +0000 UTC m=+4786.588043177" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.412915 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-2fmjx"] Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.415112 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.554291 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfdh\" (UniqueName: \"kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.554548 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.659941 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.660038 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkfdh\" (UniqueName: \"kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.660051 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc 
kubenswrapper[4814]: I0216 11:05:33.679431 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfdh\" (UniqueName: \"kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh\") pod \"crc-debug-2fmjx\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.732767 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.910211 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" event={"ID":"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3","Type":"ContainerStarted","Data":"638d4849c12130a4ecdaeed327af90bcbbafc3a07a9c107434f09c1422f60ca7"} Feb 16 11:05:33 crc kubenswrapper[4814]: I0216 11:05:33.994095 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:05:33 crc kubenswrapper[4814]: E0216 11:05:33.994413 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:05:37 crc kubenswrapper[4814]: I0216 11:05:37.994473 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:05:37 crc kubenswrapper[4814]: E0216 11:05:37.996585 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:05:45 crc kubenswrapper[4814]: I0216 11:05:45.994050 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:05:45 crc kubenswrapper[4814]: E0216 11:05:45.994735 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:05:46 crc kubenswrapper[4814]: I0216 11:05:46.039940 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" event={"ID":"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3","Type":"ContainerStarted","Data":"469b02fe3d7a7db4374d88498accb3ce14257751e858b19b7e9a5622634be9de"} Feb 16 11:05:46 crc kubenswrapper[4814]: I0216 11:05:46.060716 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" podStartSLOduration=1.546768058 podStartE2EDuration="13.060700583s" podCreationTimestamp="2026-02-16 11:05:33 +0000 UTC" firstStartedPulling="2026-02-16 11:05:33.776689 +0000 UTC m=+4791.469845180" lastFinishedPulling="2026-02-16 11:05:45.290621515 +0000 UTC m=+4802.983777705" observedRunningTime="2026-02-16 11:05:46.05545583 +0000 UTC m=+4803.748612010" watchObservedRunningTime="2026-02-16 11:05:46.060700583 +0000 UTC m=+4803.753856763" Feb 16 11:05:51 crc kubenswrapper[4814]: I0216 11:05:51.994857 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:05:51 crc kubenswrapper[4814]: 
E0216 11:05:51.996648 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:05:56 crc kubenswrapper[4814]: I0216 11:05:56.993623 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:05:56 crc kubenswrapper[4814]: E0216 11:05:56.994413 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:06:01 crc kubenswrapper[4814]: I0216 11:06:01.170831 4814 generic.go:334] "Generic (PLEG): container finished" podID="6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" containerID="469b02fe3d7a7db4374d88498accb3ce14257751e858b19b7e9a5622634be9de" exitCode=0 Feb 16 11:06:01 crc kubenswrapper[4814]: I0216 11:06:01.170922 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" event={"ID":"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3","Type":"ContainerDied","Data":"469b02fe3d7a7db4374d88498accb3ce14257751e858b19b7e9a5622634be9de"} Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.303105 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.334232 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-2fmjx"] Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.341441 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-2fmjx"] Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.354821 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkfdh\" (UniqueName: \"kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh\") pod \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.354929 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host\") pod \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\" (UID: \"6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3\") " Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.355057 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host" (OuterVolumeSpecName: "host") pod "6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" (UID: "6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.355510 4814 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-host\") on node \"crc\" DevicePath \"\"" Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.364967 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh" (OuterVolumeSpecName: "kube-api-access-zkfdh") pod "6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" (UID: "6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3"). InnerVolumeSpecName "kube-api-access-zkfdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:06:02 crc kubenswrapper[4814]: I0216 11:06:02.456471 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkfdh\" (UniqueName: \"kubernetes.io/projected/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3-kube-api-access-zkfdh\") on node \"crc\" DevicePath \"\"" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.012766 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" path="/var/lib/kubelet/pods/6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3/volumes" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.192512 4814 scope.go:117] "RemoveContainer" containerID="469b02fe3d7a7db4374d88498accb3ce14257751e858b19b7e9a5622634be9de" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.192681 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-2fmjx" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.532099 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-j247r"] Feb 16 11:06:03 crc kubenswrapper[4814]: E0216 11:06:03.533514 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" containerName="container-00" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.533637 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" containerName="container-00" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.533899 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a71e91c-39c5-4336-a73c-e6bb9a2a2ea3" containerName="container-00" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.534940 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.577169 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.577477 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrgv\" (UniqueName: \"kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.678292 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.678636 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrgv\" (UniqueName: \"kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.678481 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.699420 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrgv\" (UniqueName: \"kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv\") pod \"crc-debug-j247r\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: I0216 11:06:03.860137 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:03 crc kubenswrapper[4814]: W0216 11:06:03.896815 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36f844f1_65cb_41f1_89d2_491fc6a03fca.slice/crio-37f6b7cc7e581f356dcd23ff4bce781f818f8f9b536fc4890afe5e4cd627a79c WatchSource:0}: Error finding container 37f6b7cc7e581f356dcd23ff4bce781f818f8f9b536fc4890afe5e4cd627a79c: Status 404 returned error can't find the container with id 37f6b7cc7e581f356dcd23ff4bce781f818f8f9b536fc4890afe5e4cd627a79c Feb 16 11:06:04 crc kubenswrapper[4814]: I0216 11:06:04.212498 4814 generic.go:334] "Generic (PLEG): container finished" podID="36f844f1-65cb-41f1-89d2-491fc6a03fca" containerID="1c59bf4f42ab5314c40e071475e3375170ab174549992dc7eb753500ba16eb75" exitCode=1 Feb 16 11:06:04 crc kubenswrapper[4814]: I0216 11:06:04.212696 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/crc-debug-j247r" event={"ID":"36f844f1-65cb-41f1-89d2-491fc6a03fca","Type":"ContainerDied","Data":"1c59bf4f42ab5314c40e071475e3375170ab174549992dc7eb753500ba16eb75"} Feb 16 11:06:04 crc kubenswrapper[4814]: I0216 11:06:04.212821 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/crc-debug-j247r" event={"ID":"36f844f1-65cb-41f1-89d2-491fc6a03fca","Type":"ContainerStarted","Data":"37f6b7cc7e581f356dcd23ff4bce781f818f8f9b536fc4890afe5e4cd627a79c"} Feb 16 11:06:04 crc kubenswrapper[4814]: I0216 11:06:04.250982 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-j247r"] Feb 16 11:06:04 crc kubenswrapper[4814]: I0216 11:06:04.265723 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9t2fp/crc-debug-j247r"] Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.323902 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-j247r" Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.408973 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvrgv\" (UniqueName: \"kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv\") pod \"36f844f1-65cb-41f1-89d2-491fc6a03fca\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.409109 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host\") pod \"36f844f1-65cb-41f1-89d2-491fc6a03fca\" (UID: \"36f844f1-65cb-41f1-89d2-491fc6a03fca\") " Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.409614 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host" (OuterVolumeSpecName: "host") pod "36f844f1-65cb-41f1-89d2-491fc6a03fca" (UID: "36f844f1-65cb-41f1-89d2-491fc6a03fca"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.422676 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv" (OuterVolumeSpecName: "kube-api-access-pvrgv") pod "36f844f1-65cb-41f1-89d2-491fc6a03fca" (UID: "36f844f1-65cb-41f1-89d2-491fc6a03fca"). InnerVolumeSpecName "kube-api-access-pvrgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.510484 4814 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36f844f1-65cb-41f1-89d2-491fc6a03fca-host\") on node \"crc\" DevicePath \"\"" Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.510851 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvrgv\" (UniqueName: \"kubernetes.io/projected/36f844f1-65cb-41f1-89d2-491fc6a03fca-kube-api-access-pvrgv\") on node \"crc\" DevicePath \"\"" Feb 16 11:06:05 crc kubenswrapper[4814]: I0216 11:06:05.994611 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c" Feb 16 11:06:06 crc kubenswrapper[4814]: I0216 11:06:06.233932 4814 scope.go:117] "RemoveContainer" containerID="1c59bf4f42ab5314c40e071475e3375170ab174549992dc7eb753500ba16eb75" Feb 16 11:06:06 crc kubenswrapper[4814]: I0216 11:06:06.234099 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/crc-debug-j247r"
Feb 16 11:06:07 crc kubenswrapper[4814]: I0216 11:06:07.006233 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f844f1-65cb-41f1-89d2-491fc6a03fca" path="/var/lib/kubelet/pods/36f844f1-65cb-41f1-89d2-491fc6a03fca/volumes"
Feb 16 11:06:07 crc kubenswrapper[4814]: I0216 11:06:07.246790 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"}
Feb 16 11:06:07 crc kubenswrapper[4814]: I0216 11:06:07.677664 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 11:06:09 crc kubenswrapper[4814]: I0216 11:06:09.993352 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"
Feb 16 11:06:09 crc kubenswrapper[4814]: E0216 11:06:09.994311 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 11:06:10 crc kubenswrapper[4814]: I0216 11:06:10.276911 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" exitCode=0
Feb 16 11:06:10 crc kubenswrapper[4814]: I0216 11:06:10.276949 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"}
Feb 16 11:06:10 crc kubenswrapper[4814]: I0216 11:06:10.277010 4814 scope.go:117] "RemoveContainer" containerID="7ff7b7a99ae70fa4019597da50538e8e812279ad751f25ac3c9c1bc434db183c"
Feb 16 11:06:10 crc kubenswrapper[4814]: I0216 11:06:10.277766 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:10 crc kubenswrapper[4814]: E0216 11:06:10.278095 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:12 crc kubenswrapper[4814]: I0216 11:06:12.676869 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 11:06:12 crc kubenswrapper[4814]: I0216 11:06:12.677443 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 11:06:12 crc kubenswrapper[4814]: I0216 11:06:12.678293 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:12 crc kubenswrapper[4814]: E0216 11:06:12.678722 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:13 crc kubenswrapper[4814]: I0216 11:06:13.305075 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:13 crc kubenswrapper[4814]: E0216 11:06:13.305497 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:22 crc kubenswrapper[4814]: I0216 11:06:22.998634 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"
Feb 16 11:06:23 crc kubenswrapper[4814]: E0216 11:06:22.999369 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 11:06:23 crc kubenswrapper[4814]: I0216 11:06:23.995975 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:23 crc kubenswrapper[4814]: E0216 11:06:23.996296 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.132725 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:24 crc kubenswrapper[4814]: E0216 11:06:24.133436 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f844f1-65cb-41f1-89d2-491fc6a03fca" containerName="container-00"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.133452 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f844f1-65cb-41f1-89d2-491fc6a03fca" containerName="container-00"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.133759 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f844f1-65cb-41f1-89d2-491fc6a03fca" containerName="container-00"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.135847 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.155730 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.276875 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwh8s\" (UniqueName: \"kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.276996 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.277184 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.378840 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.378912 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.379030 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwh8s\" (UniqueName: \"kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.379507 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.379841 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.402145 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwh8s\" (UniqueName: \"kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s\") pod \"redhat-marketplace-qfr87\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") " pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:24 crc kubenswrapper[4814]: I0216 11:06:24.466167 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:25 crc kubenswrapper[4814]: I0216 11:06:25.038491 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:25 crc kubenswrapper[4814]: I0216 11:06:25.427462 4814 generic.go:334] "Generic (PLEG): container finished" podID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerID="0834cadf6c43129de4f837bf5e5d84ed83b3548ac1350d7ba7f679826661bf27" exitCode=0
Feb 16 11:06:25 crc kubenswrapper[4814]: I0216 11:06:25.427611 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerDied","Data":"0834cadf6c43129de4f837bf5e5d84ed83b3548ac1350d7ba7f679826661bf27"}
Feb 16 11:06:25 crc kubenswrapper[4814]: I0216 11:06:25.427636 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerStarted","Data":"083cf90b482eecf13d7d6e05eb76058ce3f030d6c12edfeaa73d03b2d7f810dd"}
Feb 16 11:06:25 crc kubenswrapper[4814]: I0216 11:06:25.431292 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 11:06:26 crc kubenswrapper[4814]: I0216 11:06:26.439185 4814 generic.go:334] "Generic (PLEG): container finished" podID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerID="4083569c4a143690ea68cf28c4569d57bd46cbfb512db09a8e5681f201763798" exitCode=0
Feb 16 11:06:26 crc kubenswrapper[4814]: I0216 11:06:26.439273 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerDied","Data":"4083569c4a143690ea68cf28c4569d57bd46cbfb512db09a8e5681f201763798"}
Feb 16 11:06:28 crc kubenswrapper[4814]: I0216 11:06:28.460029 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerStarted","Data":"26189864353fee350856cf92f7b2f8adbb390980dfe3eeb4f99b638f537f97e6"}
Feb 16 11:06:28 crc kubenswrapper[4814]: I0216 11:06:28.482819 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfr87" podStartSLOduration=3.020799057 podStartE2EDuration="4.482798587s" podCreationTimestamp="2026-02-16 11:06:24 +0000 UTC" firstStartedPulling="2026-02-16 11:06:25.429958693 +0000 UTC m=+4843.123114873" lastFinishedPulling="2026-02-16 11:06:26.891958223 +0000 UTC m=+4844.585114403" observedRunningTime="2026-02-16 11:06:28.476152937 +0000 UTC m=+4846.169309127" watchObservedRunningTime="2026-02-16 11:06:28.482798587 +0000 UTC m=+4846.175954777"
Feb 16 11:06:34 crc kubenswrapper[4814]: I0216 11:06:34.467077 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:34 crc kubenswrapper[4814]: I0216 11:06:34.467914 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:34 crc kubenswrapper[4814]: I0216 11:06:34.522209 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:35 crc kubenswrapper[4814]: I0216 11:06:35.566215 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:35 crc kubenswrapper[4814]: I0216 11:06:35.621562 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:36 crc kubenswrapper[4814]: I0216 11:06:36.993932 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:36 crc kubenswrapper[4814]: E0216 11:06:36.994213 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:37 crc kubenswrapper[4814]: I0216 11:06:37.540816 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfr87" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="registry-server" containerID="cri-o://26189864353fee350856cf92f7b2f8adbb390980dfe3eeb4f99b638f537f97e6" gracePeriod=2
Feb 16 11:06:37 crc kubenswrapper[4814]: I0216 11:06:37.993905 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"
Feb 16 11:06:37 crc kubenswrapper[4814]: E0216 11:06:37.994182 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 11:06:38 crc kubenswrapper[4814]: I0216 11:06:38.551447 4814 generic.go:334] "Generic (PLEG): container finished" podID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerID="26189864353fee350856cf92f7b2f8adbb390980dfe3eeb4f99b638f537f97e6" exitCode=0
Feb 16 11:06:38 crc kubenswrapper[4814]: I0216 11:06:38.551487 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerDied","Data":"26189864353fee350856cf92f7b2f8adbb390980dfe3eeb4f99b638f537f97e6"}
Feb 16 11:06:38 crc kubenswrapper[4814]: I0216 11:06:38.966000 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.101788 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwh8s\" (UniqueName: \"kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s\") pod \"457c21de-be23-4c53-9c5e-9cf705f83d53\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") "
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.101937 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content\") pod \"457c21de-be23-4c53-9c5e-9cf705f83d53\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") "
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.101979 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities\") pod \"457c21de-be23-4c53-9c5e-9cf705f83d53\" (UID: \"457c21de-be23-4c53-9c5e-9cf705f83d53\") "
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.103051 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities" (OuterVolumeSpecName: "utilities") pod "457c21de-be23-4c53-9c5e-9cf705f83d53" (UID: "457c21de-be23-4c53-9c5e-9cf705f83d53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.118707 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s" (OuterVolumeSpecName: "kube-api-access-dwh8s") pod "457c21de-be23-4c53-9c5e-9cf705f83d53" (UID: "457c21de-be23-4c53-9c5e-9cf705f83d53"). InnerVolumeSpecName "kube-api-access-dwh8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.125407 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "457c21de-be23-4c53-9c5e-9cf705f83d53" (UID: "457c21de-be23-4c53-9c5e-9cf705f83d53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.205149 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwh8s\" (UniqueName: \"kubernetes.io/projected/457c21de-be23-4c53-9c5e-9cf705f83d53-kube-api-access-dwh8s\") on node \"crc\" DevicePath \"\""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.205208 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.205221 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/457c21de-be23-4c53-9c5e-9cf705f83d53-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.562891 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfr87" event={"ID":"457c21de-be23-4c53-9c5e-9cf705f83d53","Type":"ContainerDied","Data":"083cf90b482eecf13d7d6e05eb76058ce3f030d6c12edfeaa73d03b2d7f810dd"}
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.563242 4814 scope.go:117] "RemoveContainer" containerID="26189864353fee350856cf92f7b2f8adbb390980dfe3eeb4f99b638f537f97e6"
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.562992 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfr87"
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.589879 4814 scope.go:117] "RemoveContainer" containerID="4083569c4a143690ea68cf28c4569d57bd46cbfb512db09a8e5681f201763798"
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.600105 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.614786 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfr87"]
Feb 16 11:06:39 crc kubenswrapper[4814]: I0216 11:06:39.689759 4814 scope.go:117] "RemoveContainer" containerID="0834cadf6c43129de4f837bf5e5d84ed83b3548ac1350d7ba7f679826661bf27"
Feb 16 11:06:41 crc kubenswrapper[4814]: I0216 11:06:41.005419 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" path="/var/lib/kubelet/pods/457c21de-be23-4c53-9c5e-9cf705f83d53/volumes"
Feb 16 11:06:48 crc kubenswrapper[4814]: I0216 11:06:48.994077 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:06:48 crc kubenswrapper[4814]: E0216 11:06:48.994877 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:06:49 crc kubenswrapper[4814]: I0216 11:06:49.994034 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825"
Feb 16 11:06:50 crc kubenswrapper[4814]: I0216 11:06:50.690230 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205"}
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.528298 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fc9647d64-z5jk2_3f3dfade-0392-451d-85d6-cf886a408bb4/barbican-api/0.log"
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.564513 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-fc9647d64-z5jk2_3f3dfade-0392-451d-85d6-cf886a408bb4/barbican-api-log/0.log"
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.723484 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5db6f4b556-hsqhl_f1ffe164-e3ac-43be-bd5a-c3c0aa75930a/barbican-keystone-listener/0.log"
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.772749 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5db6f4b556-hsqhl_f1ffe164-e3ac-43be-bd5a-c3c0aa75930a/barbican-keystone-listener-log/0.log"
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.817580 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cb9bb875f-gkglk_7ee17c93-aa03-460b-a8ca-9fbc19b6a23f/barbican-worker/0.log"
Feb 16 11:07:02 crc kubenswrapper[4814]: I0216 11:07:02.946913 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cb9bb875f-gkglk_7ee17c93-aa03-460b-a8ca-9fbc19b6a23f/barbican-worker-log/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.134160 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_42c5d783-c68b-4e93-bfb3-1fe359b14e8a/ceilometer-notification-agent/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.136962 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_42c5d783-c68b-4e93-bfb3-1fe359b14e8a/ceilometer-central-agent/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.219618 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_42c5d783-c68b-4e93-bfb3-1fe359b14e8a/sg-core/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.231705 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_42c5d783-c68b-4e93-bfb3-1fe359b14e8a/proxy-httpd/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.532251 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5c2e92d0-a064-4611-9539-5dd4a4490eee/cinder-api/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.574402 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c4396e79-fda2-435d-ae1f-f92a838ea655/cinder-scheduler/16.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.740991 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c4396e79-fda2-435d-ae1f-f92a838ea655/probe/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.788245 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c4396e79-fda2-435d-ae1f-f92a838ea655/cinder-scheduler/16.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.828612 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5c2e92d0-a064-4611-9539-5dd4a4490eee/cinder-api-log/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.955891 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcc884bbc-bvmwv_974dd886-6966-4ab1-a46f-1c9a4973cb31/init/0.log"
Feb 16 11:07:03 crc kubenswrapper[4814]: I0216 11:07:03.993749 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c"
Feb 16 11:07:03 crc kubenswrapper[4814]: E0216 11:07:03.994297 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.123564 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcc884bbc-bvmwv_974dd886-6966-4ab1-a46f-1c9a4973cb31/dnsmasq-dns/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.136466 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6bcc884bbc-bvmwv_974dd886-6966-4ab1-a46f-1c9a4973cb31/init/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.189120 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_77a0d3ee-2bcb-4733-89a8-b4525fc20768/glance-httpd/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.324598 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_77a0d3ee-2bcb-4733-89a8-b4525fc20768/glance-log/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.417568 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e69bc859-f4a3-4e24-92be-cbe76d3faee4/glance-log/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.426378 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e69bc859-f4a3-4e24-92be-cbe76d3faee4/glance-httpd/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.675872 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-76696f58b-dfzph_d4064477-94ed-4129-819b-63df1d34d227/horizon/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.888548 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29520661-zq9t9_96f45b84-c126-4555-ad31-189efbc1e60c/keystone-cron/0.log"
Feb 16 11:07:04 crc kubenswrapper[4814]: I0216 11:07:04.908394 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8669966799-gwc6g_d79bc6bd-dfc5-4058-a72c-f3d0bf05b8f6/keystone-api/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.178285 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a434fb2d-63b3-42cb-b686-b56870891b2c/kube-state-metrics/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.190948 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-76696f58b-dfzph_d4064477-94ed-4129-819b-63df1d34d227/horizon-log/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.433113 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-655ddb8b77-xt84d_6331cc3a-ed6b-4e28-8cb4-544f16da5f8e/neutron-httpd/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.499293 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-655ddb8b77-xt84d_6331cc3a-ed6b-4e28-8cb4-544f16da5f8e/neutron-api/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.644808 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6a0b4bfb-2144-4fd9-be15-07396c44a11c/setup-container/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.880277 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6a0b4bfb-2144-4fd9-be15-07396c44a11c/setup-container/0.log"
Feb 16 11:07:05 crc kubenswrapper[4814]: I0216 11:07:05.925946 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_6a0b4bfb-2144-4fd9-be15-07396c44a11c/rabbitmq/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.291657 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_ff9316e8-e703-4057-8e8f-f01ac439748d/nova-cell0-conductor-conductor/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.296409 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_49fb4bc5-2f98-4711-9149-e5da0a515242/nova-api-log/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.427102 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_49fb4bc5-2f98-4711-9149-e5da0a515242/nova-api-api/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.654796 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4e1bc2b6-5ddd-4528-bd46-e63a868552dd/nova-cell1-conductor-conductor/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.804019 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_66ffd666-cd01-4fe7-b6a8-9c6a86abda53/nova-cell1-novncproxy-novncproxy/0.log"
Feb 16 11:07:06 crc kubenswrapper[4814]: I0216 11:07:06.852298 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4274c26d-1a79-40ad-a0ef-9322dc9007c6/nova-metadata-log/0.log"
Feb 16 11:07:07 crc kubenswrapper[4814]: I0216 11:07:07.164919 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9dd72e1b-1b70-4e89-84eb-751cca377954/nova-scheduler-scheduler/0.log"
Feb 16 11:07:07 crc kubenswrapper[4814]: I0216 11:07:07.556304 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_54151705-0c05-4e03-99d4-9dc9d4a37de7/mysql-bootstrap/0.log"
Feb 16 11:07:07 crc kubenswrapper[4814]: I0216 11:07:07.776955 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_54151705-0c05-4e03-99d4-9dc9d4a37de7/galera/0.log"
Feb 16 11:07:07 crc kubenswrapper[4814]: I0216 11:07:07.782334 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_54151705-0c05-4e03-99d4-9dc9d4a37de7/mysql-bootstrap/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.047038 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43c73c4c-5cdf-4b6d-93b0-afeb459b74c1/mysql-bootstrap/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.205176 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43c73c4c-5cdf-4b6d-93b0-afeb459b74c1/mysql-bootstrap/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.275675 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43c73c4c-5cdf-4b6d-93b0-afeb459b74c1/galera/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.431338 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8a5610e4-be60-4c16-9911-e06986025235/openstackclient/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.568258 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dc2nv_7de6150f-ee9f-437c-8813-4255d2533e45/ovn-controller/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.672769 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_4274c26d-1a79-40ad-a0ef-9322dc9007c6/nova-metadata-metadata/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.809933 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-w6zt6_ce3c611b-9142-4702-a356-b22606f5b935/openstack-network-exporter/0.log"
Feb 16 11:07:08 crc kubenswrapper[4814]: I0216 11:07:08.910236 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v6xwq_51879c30-795f-4f27-8018-fdafbafd8a4d/ovsdb-server-init/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.118742 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v6xwq_51879c30-795f-4f27-8018-fdafbafd8a4d/ovsdb-server-init/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.171023 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v6xwq_51879c30-795f-4f27-8018-fdafbafd8a4d/ovsdb-server/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.250523 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v6xwq_51879c30-795f-4f27-8018-fdafbafd8a4d/ovs-vswitchd/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.426716 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31/openstack-network-exporter/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.434483 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3fa2cf10-0d0f-4e1a-90ae-b500d90dcd31/ovn-northd/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.535369 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6eec7640-cb34-4716-90e6-36e4ba140f8f/openstack-network-exporter/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.684729 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6eec7640-cb34-4716-90e6-36e4ba140f8f/ovsdbserver-nb/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.829559 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_687aef9d-288e-47b4-9f5f-1ec1bd5b17f9/ovsdbserver-sb/0.log"
Feb 16 11:07:09 crc kubenswrapper[4814]: I0216 11:07:09.834074 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_687aef9d-288e-47b4-9f5f-1ec1bd5b17f9/openstack-network-exporter/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.075318 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b595588cb-jj9fp_a1c41c68-9785-42e3-aba9-ad9b36fc72d8/placement-api/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.142345 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-b595588cb-jj9fp_a1c41c68-9785-42e3-aba9-ad9b36fc72d8/placement-log/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.287829 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d64fe4ad-1b8d-4f94-b825-675bb6bd7f89/init-config-reloader/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.445852 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d64fe4ad-1b8d-4f94-b825-675bb6bd7f89/prometheus/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.466987 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d64fe4ad-1b8d-4f94-b825-675bb6bd7f89/init-config-reloader/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.501029 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d64fe4ad-1b8d-4f94-b825-675bb6bd7f89/config-reloader/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.514320 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_d64fe4ad-1b8d-4f94-b825-675bb6bd7f89/thanos-sidecar/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.697315 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19661670-37f9-4577-93d4-cd87303f3008/setup-container/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.934343 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19661670-37f9-4577-93d4-cd87303f3008/setup-container/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.953896 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b4e759af-f091-47c0-accc-c68b45b277fa/setup-container/0.log"
Feb 16 11:07:10 crc kubenswrapper[4814]: I0216 11:07:10.959230 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19661670-37f9-4577-93d4-cd87303f3008/rabbitmq/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.125117 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b4e759af-f091-47c0-accc-c68b45b277fa/setup-container/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.224343 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b4e759af-f091-47c0-accc-c68b45b277fa/rabbitmq/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.402310 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-bf98696f9-fcvdv_2818c738-cd93-486f-8b95-3e0c60ec8b59/proxy-server/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.411395 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-bf98696f9-fcvdv_2818c738-cd93-486f-8b95-3e0c60ec8b59/proxy-httpd/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.442667 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-68zpk_f89153bb-4a9e-419a-b142-b339a0797d78/swift-ring-rebalance/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.596589 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/account-auditor/0.log"
Feb 16 11:07:11 crc kubenswrapper[4814]: I0216 11:07:11.640789 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/account-reaper/0.log"
Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.331800 4814 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/container-auditor/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.333853 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/account-server/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.368688 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/account-replicator/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.407327 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/container-replicator/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.525514 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/container-server/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.535094 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/container-updater/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.623329 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/object-auditor/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.679446 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/object-expirer/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.750726 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/object-server/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.767685 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/object-replicator/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.892490 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/rsync/0.log" Feb 16 11:07:12 crc kubenswrapper[4814]: I0216 11:07:12.897431 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/object-updater/0.log" Feb 16 11:07:13 crc kubenswrapper[4814]: I0216 11:07:13.030414 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_33b56eb5-3fe6-4c32-9ddd-13eb56ef8b36/swift-recon-cron/0.log" Feb 16 11:07:13 crc kubenswrapper[4814]: I0216 11:07:13.264513 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_20340e82-7a4f-4828-affb-85843eca8f6c/watcher-api-log/0.log" Feb 16 11:07:13 crc kubenswrapper[4814]: I0216 11:07:13.448634 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_f0c7e897-72d7-41c0-a0ad-5cbd8f2c4af2/watcher-applier/0.log" Feb 16 11:07:14 crc kubenswrapper[4814]: I0216 11:07:14.051220 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_2b889a9b-aa4c-4e93-92f7-b37c7e86838b/watcher-decision-engine/0.log" Feb 16 11:07:15 crc kubenswrapper[4814]: I0216 11:07:15.690376 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_20340e82-7a4f-4828-affb-85843eca8f6c/watcher-api/0.log" Feb 16 11:07:17 crc kubenswrapper[4814]: I0216 11:07:17.993865 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:07:17 crc kubenswrapper[4814]: E0216 11:07:17.994441 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:07:22 crc kubenswrapper[4814]: I0216 11:07:22.500482 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_5fdd7785-aaf8-4454-b063-9723065293b7/memcached/0.log" Feb 16 11:07:28 crc kubenswrapper[4814]: I0216 11:07:28.994769 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:07:28 crc kubenswrapper[4814]: E0216 11:07:28.996024 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:07:41 crc kubenswrapper[4814]: I0216 11:07:41.993933 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:07:41 crc kubenswrapper[4814]: E0216 11:07:41.994599 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:07:43 crc kubenswrapper[4814]: I0216 11:07:43.552273 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/util/0.log" Feb 16 11:07:43 crc kubenswrapper[4814]: I0216 11:07:43.770473 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/pull/0.log" Feb 16 11:07:43 crc kubenswrapper[4814]: I0216 11:07:43.793230 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/util/0.log" Feb 16 11:07:44 crc kubenswrapper[4814]: I0216 11:07:44.006045 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/pull/0.log" Feb 16 11:07:44 crc kubenswrapper[4814]: I0216 11:07:44.199827 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/pull/0.log" Feb 16 11:07:44 crc kubenswrapper[4814]: I0216 11:07:44.206602 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/util/0.log" Feb 16 11:07:44 crc kubenswrapper[4814]: I0216 11:07:44.432889 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dec99bfa9ff6bd2027603865b62cf5c3e3be3c74628f894259cfb8cc5e4fwn7_ade71140-7224-44bb-bf6d-a15f0af16718/extract/0.log" Feb 16 11:07:44 crc kubenswrapper[4814]: I0216 11:07:44.645119 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-9ltsr_2ffba7b1-f1c7-4422-bbd2-240022e594a9/manager/0.log" Feb 16 11:07:45 crc kubenswrapper[4814]: I0216 11:07:45.067815 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-kmskc_5dce01de-2987-428e-8e82-916685ec38d0/manager/0.log" Feb 16 11:07:45 crc 
kubenswrapper[4814]: I0216 11:07:45.243472 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-mrqpp_e763fa22-f350-4b3c-930e-f115981b2cd5/manager/0.log" Feb 16 11:07:45 crc kubenswrapper[4814]: I0216 11:07:45.546681 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-dl9md_2d17d4ba-3b70-4b99-808c-a9fb764754a4/manager/0.log" Feb 16 11:07:45 crc kubenswrapper[4814]: I0216 11:07:45.977327 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-mscb9_d6383f25-e9d4-4606-aa4a-fd1ed2b9299c/manager/0.log" Feb 16 11:07:46 crc kubenswrapper[4814]: I0216 11:07:46.137183 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-shv45_2e5a39bc-3922-4dfd-b2c9-5ff4ebbeeb74/manager/0.log" Feb 16 11:07:46 crc kubenswrapper[4814]: I0216 11:07:46.304555 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-5fwts_cd61e4fa-ce01-4597-9f4c-e90419b3c582/manager/0.log" Feb 16 11:07:46 crc kubenswrapper[4814]: I0216 11:07:46.452747 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-f6jgb_7282bc18-ffbd-4680-abb9-40dbe56ad895/manager/0.log" Feb 16 11:07:46 crc kubenswrapper[4814]: I0216 11:07:46.581700 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-h5w4b_0808e383-92fc-4af4-82c1-7324a6729e7a/manager/0.log" Feb 16 11:07:46 crc kubenswrapper[4814]: I0216 11:07:46.745979 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-wv8lv_e720ed93-e990-4508-ad82-cd7c7d097e9c/manager/0.log" Feb 16 
11:07:47 crc kubenswrapper[4814]: I0216 11:07:47.050723 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-qstdq_aaa14470-c664-49a4-88f4-d48c9c2f7eda/manager/0.log" Feb 16 11:07:47 crc kubenswrapper[4814]: I0216 11:07:47.189487 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-qbxxf_27612122-6b3e-468c-9050-ff180e9212d8/manager/0.log" Feb 16 11:07:47 crc kubenswrapper[4814]: I0216 11:07:47.373736 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cd6plz_a9e0b3a6-0817-4c54-acf5-11145e9e0dab/manager/0.log" Feb 16 11:07:47 crc kubenswrapper[4814]: I0216 11:07:47.762188 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-68c79ff849-568kl_64658bf5-6ea3-4442-a3f1-fe3b1e2fdace/operator/0.log" Feb 16 11:07:48 crc kubenswrapper[4814]: I0216 11:07:48.016921 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-txc7s_3f66b0c7-ba80-4484-b02a-07159181c1f2/registry-server/0.log" Feb 16 11:07:48 crc kubenswrapper[4814]: I0216 11:07:48.482448 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-sl5wn_fea081c6-407f-4dd4-958f-0d567d0df233/manager/0.log" Feb 16 11:07:48 crc kubenswrapper[4814]: I0216 11:07:48.799014 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-rtsgp_e9d0d20b-f520-4a52-93d5-02fa13273625/manager/0.log" Feb 16 11:07:49 crc kubenswrapper[4814]: I0216 11:07:49.103006 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6lhcl_1bfe3197-5fa8-47ab-9361-8c0f7d6b5b1a/operator/0.log" 
Feb 16 11:07:49 crc kubenswrapper[4814]: I0216 11:07:49.191022 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5c6596c9fc-2tsm2_c2b42d7c-69c1-4052-910f-a174001cc739/manager/0.log" Feb 16 11:07:49 crc kubenswrapper[4814]: I0216 11:07:49.364777 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-5pd8h_12f8611d-0069-4ea0-a926-3f7c34ac5424/manager/0.log" Feb 16 11:07:49 crc kubenswrapper[4814]: I0216 11:07:49.598698 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-7dl25_3a2d26bf-3be8-48a8-845d-ea10f5196876/manager/0.log" Feb 16 11:07:49 crc kubenswrapper[4814]: I0216 11:07:49.709409 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-wh9lm_0e3cc780-e5be-4808-b9c3-d514994ce8cb/manager/0.log" Feb 16 11:07:50 crc kubenswrapper[4814]: I0216 11:07:50.148050 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7787dfc59c-cx6k2_c436a9b9-dacb-4c82-b799-117453b8c695/manager/0.log" Feb 16 11:07:50 crc kubenswrapper[4814]: I0216 11:07:50.280039 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-f9l2v_57a9e823-2475-4a15-9ac0-1cd8b4f0197c/manager/0.log" Feb 16 11:07:52 crc kubenswrapper[4814]: I0216 11:07:52.999356 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:07:53 crc kubenswrapper[4814]: E0216 11:07:52.999895 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:07:53 crc kubenswrapper[4814]: I0216 11:07:53.839630 4814 scope.go:117] "RemoveContainer" containerID="ca1bdea9bf262de34a158d302a49042e14c81556884fd3ff80f45d8de83fbf47" Feb 16 11:07:53 crc kubenswrapper[4814]: I0216 11:07:53.893414 4814 scope.go:117] "RemoveContainer" containerID="5372607a620024074210093437dcf69012cdc56abbeae63a58c6a9786982a8ce" Feb 16 11:07:53 crc kubenswrapper[4814]: I0216 11:07:53.942564 4814 scope.go:117] "RemoveContainer" containerID="59dda2b010e87e55c898053ce276bb7a992f153f5bb8c0fa0951406b1229e495" Feb 16 11:07:55 crc kubenswrapper[4814]: I0216 11:07:55.132874 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-ndn8x_96b8a99b-83ce-4d62-b471-a8bcc47aa67a/manager/0.log" Feb 16 11:08:05 crc kubenswrapper[4814]: I0216 11:08:05.994247 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:08:05 crc kubenswrapper[4814]: E0216 11:08:05.995081 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:08:13 crc kubenswrapper[4814]: I0216 11:08:13.942775 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-drcdg_73fb725a-9a40-4283-8e3e-296294a08655/control-plane-machine-set-operator/0.log" Feb 16 11:08:14 crc kubenswrapper[4814]: I0216 11:08:14.160277 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4p95d_b3d36256-4e8e-460d-ad98-eaaafbb76021/machine-api-operator/0.log" Feb 16 11:08:14 crc kubenswrapper[4814]: I0216 11:08:14.177504 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4p95d_b3d36256-4e8e-460d-ad98-eaaafbb76021/kube-rbac-proxy/0.log" Feb 16 11:08:19 crc kubenswrapper[4814]: I0216 11:08:19.993777 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:08:19 crc kubenswrapper[4814]: E0216 11:08:19.994517 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:08:29 crc kubenswrapper[4814]: I0216 11:08:29.300002 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-f8z96_ca18adc5-a900-4bcd-ad7c-6bcb7d4c2331/cert-manager-controller/0.log" Feb 16 11:08:29 crc kubenswrapper[4814]: I0216 11:08:29.476184 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9w2f7_8ced6e31-bb91-4c18-a157-2daa6ca09a74/cert-manager-cainjector/0.log" Feb 16 11:08:29 crc kubenswrapper[4814]: I0216 11:08:29.592930 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-fxwzc_94e88850-618a-40c9-85a0-6813e57e7715/cert-manager-webhook/0.log" Feb 16 11:08:34 crc kubenswrapper[4814]: I0216 11:08:34.994358 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:08:34 crc kubenswrapper[4814]: E0216 11:08:34.995297 4814 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.304000 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-nn582_4b6baf37-55ba-48ef-bae6-c74b2f647956/nmstate-console-plugin/0.log" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.513633 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lvh27_4d8e1bb4-3c1d-43e2-b165-83e51d57ebb1/nmstate-handler/0.log" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.579045 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c4tzr_6959ecae-2538-428c-956d-edf875e58947/kube-rbac-proxy/0.log" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.750206 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c4tzr_6959ecae-2538-428c-956d-edf875e58947/nmstate-metrics/0.log" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.815253 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-nccwr_6dac0f48-5703-4178-b06c-51edae8f0735/nmstate-operator/0.log" Feb 16 11:08:43 crc kubenswrapper[4814]: I0216 11:08:43.944119 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-fbxdv_4dc40630-922d-4c2a-b61b-2dc11a8aa9fd/nmstate-webhook/0.log" Feb 16 11:08:47 crc kubenswrapper[4814]: I0216 11:08:47.993774 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:08:47 crc kubenswrapper[4814]: E0216 11:08:47.994634 4814 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:08:58 crc kubenswrapper[4814]: I0216 11:08:58.906467 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-85xhn_7b1e81f6-bcc5-439b-845d-d7f11f18a3ca/prometheus-operator/0.log" Feb 16 11:08:59 crc kubenswrapper[4814]: I0216 11:08:59.059824 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8_44a93ef4-16c4-482f-a103-bfed7099ab40/prometheus-operator-admission-webhook/0.log" Feb 16 11:08:59 crc kubenswrapper[4814]: I0216 11:08:59.127220 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fff667df6-zrc42_1674b66d-5eb2-4f20-853b-d7321fe6194c/prometheus-operator-admission-webhook/0.log" Feb 16 11:08:59 crc kubenswrapper[4814]: I0216 11:08:59.254905 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-ww9s6_633edb4f-6c36-408b-bd22-3930c2112c90/operator/0.log" Feb 16 11:08:59 crc kubenswrapper[4814]: I0216 11:08:59.319687 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7cc86_5998ae63-01b5-4762-9606-6b5a3f091b5c/perses-operator/0.log" Feb 16 11:08:59 crc kubenswrapper[4814]: I0216 11:08:59.993915 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:08:59 crc kubenswrapper[4814]: E0216 11:08:59.994403 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.249655 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:07 crc kubenswrapper[4814]: E0216 11:09:07.250754 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="extract-content" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.250773 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="extract-content" Feb 16 11:09:07 crc kubenswrapper[4814]: E0216 11:09:07.250794 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="registry-server" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.250801 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="registry-server" Feb 16 11:09:07 crc kubenswrapper[4814]: E0216 11:09:07.250830 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="extract-utilities" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.250838 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="extract-utilities" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.251058 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="457c21de-be23-4c53-9c5e-9cf705f83d53" containerName="registry-server" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.252733 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.257546 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.386950 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxpq\" (UniqueName: \"kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.387336 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.387465 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.489051 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.489187 4814 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-njxpq\" (UniqueName: \"kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.489276 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.489703 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.490283 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.517180 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxpq\" (UniqueName: \"kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq\") pod \"community-operators-p78ln\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.582953 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.959909 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:09:07 crc kubenswrapper[4814]: I0216 11:09:07.960177 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:09:08 crc kubenswrapper[4814]: I0216 11:09:08.090700 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:08 crc kubenswrapper[4814]: I0216 11:09:08.930072 4814 generic.go:334] "Generic (PLEG): container finished" podID="22af591d-6e88-4acb-8838-490d0afe88f6" containerID="3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a" exitCode=0 Feb 16 11:09:08 crc kubenswrapper[4814]: I0216 11:09:08.930127 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerDied","Data":"3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a"} Feb 16 11:09:08 crc kubenswrapper[4814]: I0216 11:09:08.933879 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerStarted","Data":"9370620e56726b1a5aff431e5d765d6f44fbad31fca8928eecdd446dc3b69442"} Feb 16 11:09:09 crc kubenswrapper[4814]: I0216 11:09:09.950773 4814 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerStarted","Data":"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20"} Feb 16 11:09:10 crc kubenswrapper[4814]: I0216 11:09:10.965009 4814 generic.go:334] "Generic (PLEG): container finished" podID="22af591d-6e88-4acb-8838-490d0afe88f6" containerID="56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20" exitCode=0 Feb 16 11:09:10 crc kubenswrapper[4814]: I0216 11:09:10.965078 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerDied","Data":"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20"} Feb 16 11:09:11 crc kubenswrapper[4814]: I0216 11:09:11.978924 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerStarted","Data":"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107"} Feb 16 11:09:12 crc kubenswrapper[4814]: I0216 11:09:12.007205 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p78ln" podStartSLOduration=2.3564726719999998 podStartE2EDuration="5.007187103s" podCreationTimestamp="2026-02-16 11:09:07 +0000 UTC" firstStartedPulling="2026-02-16 11:09:08.932273848 +0000 UTC m=+5006.625430038" lastFinishedPulling="2026-02-16 11:09:11.582988279 +0000 UTC m=+5009.276144469" observedRunningTime="2026-02-16 11:09:12.002864025 +0000 UTC m=+5009.696020245" watchObservedRunningTime="2026-02-16 11:09:12.007187103 +0000 UTC m=+5009.700343293" Feb 16 11:09:12 crc kubenswrapper[4814]: I0216 11:09:12.998806 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:09:13 crc kubenswrapper[4814]: E0216 
11:09:12.999304 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:09:14 crc kubenswrapper[4814]: I0216 11:09:14.348909 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4vfps_3e127231-de8b-4ee9-9bae-8cefb19310a0/kube-rbac-proxy/0.log" Feb 16 11:09:14 crc kubenswrapper[4814]: I0216 11:09:14.581619 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-4vfps_3e127231-de8b-4ee9-9bae-8cefb19310a0/controller/0.log" Feb 16 11:09:14 crc kubenswrapper[4814]: I0216 11:09:14.951611 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-frr-files/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.214449 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-reloader/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.223288 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-frr-files/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.239052 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-reloader/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.281949 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-metrics/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.395330 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-frr-files/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.436981 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-reloader/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.467259 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-metrics/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.498195 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-metrics/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.759419 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-reloader/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.777267 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-frr-files/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.777876 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/controller/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.826644 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/cp-metrics/0.log" Feb 16 11:09:15 crc kubenswrapper[4814]: I0216 11:09:15.968407 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/frr-metrics/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.021110 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/kube-rbac-proxy/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.086847 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/kube-rbac-proxy-frr/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.209090 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/reloader/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.354162 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-b86zg_3a69ba3b-0b8a-4c6c-93c1-edfdd29e2573/frr-k8s-webhook-server/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.526305 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6f6878f94-mgj6r_4f9ef0e1-d42f-4c53-b61f-ac0fc2bcea81/manager/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.667757 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f6c7df7cb-8rpbf_6a1b6a4d-7919-4cd7-bb65-bca5b645379f/webhook-server/0.log" Feb 16 11:09:16 crc kubenswrapper[4814]: I0216 11:09:16.839556 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-55g6q_d901565c-c77f-4940-aa1c-bc148ed6cb2b/kube-rbac-proxy/0.log" Feb 16 11:09:17 crc kubenswrapper[4814]: I0216 11:09:17.446731 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-55g6q_d901565c-c77f-4940-aa1c-bc148ed6cb2b/speaker/0.log" Feb 16 11:09:17 crc kubenswrapper[4814]: I0216 11:09:17.583842 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:17 crc kubenswrapper[4814]: I0216 11:09:17.583899 4814 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:17 crc kubenswrapper[4814]: I0216 11:09:17.642370 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:17 crc kubenswrapper[4814]: I0216 11:09:17.764110 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b965k_5b42fe8a-c4e7-48ca-97a1-6739547d284f/frr/0.log" Feb 16 11:09:18 crc kubenswrapper[4814]: I0216 11:09:18.091677 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:18 crc kubenswrapper[4814]: I0216 11:09:18.146274 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.066087 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p78ln" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="registry-server" containerID="cri-o://3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107" gracePeriod=2 Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.595746 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.747683 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njxpq\" (UniqueName: \"kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq\") pod \"22af591d-6e88-4acb-8838-490d0afe88f6\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.747768 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities\") pod \"22af591d-6e88-4acb-8838-490d0afe88f6\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.747867 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content\") pod \"22af591d-6e88-4acb-8838-490d0afe88f6\" (UID: \"22af591d-6e88-4acb-8838-490d0afe88f6\") " Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.749889 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities" (OuterVolumeSpecName: "utilities") pod "22af591d-6e88-4acb-8838-490d0afe88f6" (UID: "22af591d-6e88-4acb-8838-490d0afe88f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.764167 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq" (OuterVolumeSpecName: "kube-api-access-njxpq") pod "22af591d-6e88-4acb-8838-490d0afe88f6" (UID: "22af591d-6e88-4acb-8838-490d0afe88f6"). InnerVolumeSpecName "kube-api-access-njxpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.807716 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22af591d-6e88-4acb-8838-490d0afe88f6" (UID: "22af591d-6e88-4acb-8838-490d0afe88f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.850592 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.850629 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22af591d-6e88-4acb-8838-490d0afe88f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:20 crc kubenswrapper[4814]: I0216 11:09:20.850641 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njxpq\" (UniqueName: \"kubernetes.io/projected/22af591d-6e88-4acb-8838-490d0afe88f6-kube-api-access-njxpq\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.076436 4814 generic.go:334] "Generic (PLEG): container finished" podID="22af591d-6e88-4acb-8838-490d0afe88f6" containerID="3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107" exitCode=0 Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.076593 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerDied","Data":"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107"} Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.077672 4814 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-p78ln" event={"ID":"22af591d-6e88-4acb-8838-490d0afe88f6","Type":"ContainerDied","Data":"9370620e56726b1a5aff431e5d765d6f44fbad31fca8928eecdd446dc3b69442"} Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.077780 4814 scope.go:117] "RemoveContainer" containerID="3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.076680 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p78ln" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.101081 4814 scope.go:117] "RemoveContainer" containerID="56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.107789 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.115575 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p78ln"] Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.599469 4814 scope.go:117] "RemoveContainer" containerID="3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.642887 4814 scope.go:117] "RemoveContainer" containerID="3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107" Feb 16 11:09:21 crc kubenswrapper[4814]: E0216 11:09:21.646261 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107\": container with ID starting with 3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107 not found: ID does not exist" containerID="3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 
11:09:21.646304 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107"} err="failed to get container status \"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107\": rpc error: code = NotFound desc = could not find container \"3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107\": container with ID starting with 3833e209e619490b95c0eaf75e746a4cbe84a5ca574c89a9b448976ff3fad107 not found: ID does not exist" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.646331 4814 scope.go:117] "RemoveContainer" containerID="56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20" Feb 16 11:09:21 crc kubenswrapper[4814]: E0216 11:09:21.647876 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20\": container with ID starting with 56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20 not found: ID does not exist" containerID="56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.647906 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20"} err="failed to get container status \"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20\": rpc error: code = NotFound desc = could not find container \"56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20\": container with ID starting with 56b51f33c24e5564d28b0c71b3a65452a47b246a9b4f3b60b09012d61a16ab20 not found: ID does not exist" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.647926 4814 scope.go:117] "RemoveContainer" containerID="3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a" Feb 16 11:09:21 crc 
kubenswrapper[4814]: E0216 11:09:21.648263 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a\": container with ID starting with 3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a not found: ID does not exist" containerID="3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a" Feb 16 11:09:21 crc kubenswrapper[4814]: I0216 11:09:21.648402 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a"} err="failed to get container status \"3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a\": rpc error: code = NotFound desc = could not find container \"3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a\": container with ID starting with 3be6110a8dd07af3954592e19c011426c8a9bfe7f8599b62dffeb1142b2cff1a not found: ID does not exist" Feb 16 11:09:23 crc kubenswrapper[4814]: I0216 11:09:23.011460 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" path="/var/lib/kubelet/pods/22af591d-6e88-4acb-8838-490d0afe88f6/volumes" Feb 16 11:09:26 crc kubenswrapper[4814]: I0216 11:09:26.994098 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:09:26 crc kubenswrapper[4814]: E0216 11:09:26.994947 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:09:32 crc kubenswrapper[4814]: I0216 11:09:32.076185 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/util/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.036199 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.047339 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/util/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.087810 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.275336 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/util/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.275838 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/extract/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.282138 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08fcl5x_7e703789-c69e-4376-a513-cd7b042c66b4/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.432092 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/util/0.log" Feb 16 
11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.568273 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/util/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.605669 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.613851 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.836162 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/extract/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.860931 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/util/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.873350 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213k9lgj_6a251c74-29fa-41ea-8f69-5cad14030a5f/pull/0.log" Feb 16 11:09:33 crc kubenswrapper[4814]: I0216 11:09:33.992043 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-utilities/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.166149 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-utilities/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.212964 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-content/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.215714 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-content/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.370918 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-content/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.371712 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/extract-utilities/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.597713 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-utilities/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.883699 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-utilities/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.904777 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-content/0.log" Feb 16 11:09:34 crc kubenswrapper[4814]: I0216 11:09:34.959829 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-content/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.034008 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j5d9z_690c572b-3be5-4f1d-bb8b-c618d3e9e6d5/registry-server/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.128101 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-utilities/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.142358 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/extract-content/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.400729 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/util/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.564859 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/pull/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.565576 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/pull/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.608820 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/util/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.768868 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/pull/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.812636 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/util/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.847862 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah6576_cf16c7fb-2e89-4cc7-b19f-6ac91d078db5/extract/0.log" Feb 16 11:09:35 crc kubenswrapper[4814]: I0216 11:09:35.883523 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-thm8v_4aed1b37-4a4f-4684-9a3d-ccdfe4efe6e5/registry-server/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.008829 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-utilities/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.021465 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-qrxcn_6a2f8066-0e53-4f49-ad72-83d1569a8bd4/marketplace-operator/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.169406 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-utilities/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.175281 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-content/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.204111 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-content/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.387500 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-content/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.397569 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/extract-utilities/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.442709 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-utilities/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.557840 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s6k7j_ad487dcb-3042-4cfe-abe7-0c9df7cc212c/registry-server/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.620830 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-utilities/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.629099 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-content/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.683093 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-content/0.log" Feb 16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.863373 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-content/0.log" Feb 
16 11:09:36 crc kubenswrapper[4814]: I0216 11:09:36.869418 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/extract-utilities/0.log" Feb 16 11:09:37 crc kubenswrapper[4814]: I0216 11:09:37.577403 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mxvf2_5eb190c6-74c7-4b35-b748-ece1660772f1/registry-server/0.log" Feb 16 11:09:37 crc kubenswrapper[4814]: I0216 11:09:37.960260 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:09:37 crc kubenswrapper[4814]: I0216 11:09:37.960326 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:09:40 crc kubenswrapper[4814]: I0216 11:09:40.994032 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:09:40 crc kubenswrapper[4814]: E0216 11:09:40.994802 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:09:50 crc kubenswrapper[4814]: I0216 11:09:50.823742 4814 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fff667df6-9lhq8_44a93ef4-16c4-482f-a103-bfed7099ab40/prometheus-operator-admission-webhook/0.log" Feb 16 11:09:50 crc kubenswrapper[4814]: I0216 11:09:50.874055 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-85xhn_7b1e81f6-bcc5-439b-845d-d7f11f18a3ca/prometheus-operator/0.log" Feb 16 11:09:50 crc kubenswrapper[4814]: I0216 11:09:50.891671 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5fff667df6-zrc42_1674b66d-5eb2-4f20-853b-d7321fe6194c/prometheus-operator-admission-webhook/0.log" Feb 16 11:09:51 crc kubenswrapper[4814]: I0216 11:09:51.037855 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7cc86_5998ae63-01b5-4762-9606-6b5a3f091b5c/perses-operator/0.log" Feb 16 11:09:51 crc kubenswrapper[4814]: I0216 11:09:51.067991 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-ww9s6_633edb4f-6c36-408b-bd22-3930c2112c90/operator/0.log" Feb 16 11:09:53 crc kubenswrapper[4814]: I0216 11:09:53.008437 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:09:53 crc kubenswrapper[4814]: E0216 11:09:53.011443 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:10:03 crc kubenswrapper[4814]: I0216 11:10:03.993431 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:10:03 crc 
kubenswrapper[4814]: E0216 11:10:03.995367 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:10:07 crc kubenswrapper[4814]: I0216 11:10:07.959778 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:10:07 crc kubenswrapper[4814]: I0216 11:10:07.960314 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:10:07 crc kubenswrapper[4814]: I0216 11:10:07.960362 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 11:10:07 crc kubenswrapper[4814]: I0216 11:10:07.961163 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:10:07 crc kubenswrapper[4814]: I0216 11:10:07.961244 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" 
podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205" gracePeriod=600 Feb 16 11:10:08 crc kubenswrapper[4814]: I0216 11:10:08.505373 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205" exitCode=0 Feb 16 11:10:08 crc kubenswrapper[4814]: I0216 11:10:08.505461 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205"} Feb 16 11:10:08 crc kubenswrapper[4814]: I0216 11:10:08.505710 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerStarted","Data":"473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"} Feb 16 11:10:08 crc kubenswrapper[4814]: I0216 11:10:08.505737 4814 scope.go:117] "RemoveContainer" containerID="dd0d3521da6a0e24ef312c304fe6d7c62f429f6d84f85c0444826a80b2a94825" Feb 16 11:10:14 crc kubenswrapper[4814]: I0216 11:10:14.993819 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:10:14 crc kubenswrapper[4814]: E0216 11:10:14.995762 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:10:28 crc kubenswrapper[4814]: I0216 11:10:28.994520 4814 scope.go:117] "RemoveContainer" 
containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:10:28 crc kubenswrapper[4814]: E0216 11:10:28.995573 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:10:40 crc kubenswrapper[4814]: I0216 11:10:40.999212 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:10:41 crc kubenswrapper[4814]: E0216 11:10:41.000361 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:10:54 crc kubenswrapper[4814]: I0216 11:10:54.993769 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:10:54 crc kubenswrapper[4814]: E0216 11:10:54.994828 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:06 crc kubenswrapper[4814]: I0216 11:11:06.993667 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:11:06 crc kubenswrapper[4814]: E0216 11:11:06.994496 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:17 crc kubenswrapper[4814]: I0216 11:11:17.993366 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:11:19 crc kubenswrapper[4814]: I0216 11:11:19.302063 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerStarted","Data":"275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a"} Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.339264 4814 generic.go:334] "Generic (PLEG): container finished" podID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerID="5595f4e8c378484cafa42f0033eb5369b218c182d492b5113843eeb88bf3f4dd" exitCode=0 Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.339356 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9t2fp/must-gather-r6b42" event={"ID":"47ffdfe2-41b9-4fa3-abb9-ba4f11507038","Type":"ContainerDied","Data":"5595f4e8c378484cafa42f0033eb5369b218c182d492b5113843eeb88bf3f4dd"} Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.341101 4814 scope.go:117] "RemoveContainer" containerID="5595f4e8c378484cafa42f0033eb5369b218c182d492b5113843eeb88bf3f4dd" Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.346045 4814 generic.go:334] "Generic (PLEG): container finished" podID="c4396e79-fda2-435d-ae1f-f92a838ea655" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" exitCode=0 Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.346174 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c4396e79-fda2-435d-ae1f-f92a838ea655","Type":"ContainerDied","Data":"275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a"} Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.346236 4814 scope.go:117] "RemoveContainer" containerID="cb6905ce5e530640fee37e9923697228fc1b73a8f83f4ef948d79e14555a035c" Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.347052 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:11:22 crc kubenswrapper[4814]: E0216 11:11:22.347620 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.676643 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:11:22 crc kubenswrapper[4814]: I0216 11:11:22.677015 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:11:23 crc kubenswrapper[4814]: I0216 11:11:23.320054 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9t2fp_must-gather-r6b42_47ffdfe2-41b9-4fa3-abb9-ba4f11507038/gather/0.log" Feb 16 11:11:23 crc kubenswrapper[4814]: I0216 11:11:23.357605 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:11:23 crc kubenswrapper[4814]: E0216 11:11:23.357932 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:24 crc kubenswrapper[4814]: I0216 11:11:24.676870 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:11:24 crc kubenswrapper[4814]: I0216 11:11:24.678099 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:11:24 crc kubenswrapper[4814]: E0216 11:11:24.678653 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.242766 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9t2fp/must-gather-r6b42"] Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.243501 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9t2fp/must-gather-r6b42" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="copy" containerID="cri-o://df5817f734df2913d637c144026ed471b3b10c1a6ce10c0984ac40129ec21736" gracePeriod=2 Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.251686 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9t2fp/must-gather-r6b42"] Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.474851 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9t2fp_must-gather-r6b42_47ffdfe2-41b9-4fa3-abb9-ba4f11507038/copy/0.log" Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.480396 4814 generic.go:334] "Generic (PLEG): container finished" 
podID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerID="df5817f734df2913d637c144026ed471b3b10c1a6ce10c0984ac40129ec21736" exitCode=143 Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.788303 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9t2fp_must-gather-r6b42_47ffdfe2-41b9-4fa3-abb9-ba4f11507038/copy/0.log" Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.789144 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.865878 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output\") pod \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.866155 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfpcw\" (UniqueName: \"kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw\") pod \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\" (UID: \"47ffdfe2-41b9-4fa3-abb9-ba4f11507038\") " Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.878755 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw" (OuterVolumeSpecName: "kube-api-access-pfpcw") pod "47ffdfe2-41b9-4fa3-abb9-ba4f11507038" (UID: "47ffdfe2-41b9-4fa3-abb9-ba4f11507038"). InnerVolumeSpecName "kube-api-access-pfpcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:31 crc kubenswrapper[4814]: I0216 11:11:31.969364 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfpcw\" (UniqueName: \"kubernetes.io/projected/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-kube-api-access-pfpcw\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.025198 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "47ffdfe2-41b9-4fa3-abb9-ba4f11507038" (UID: "47ffdfe2-41b9-4fa3-abb9-ba4f11507038"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.071200 4814 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47ffdfe2-41b9-4fa3-abb9-ba4f11507038-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.492071 4814 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9t2fp_must-gather-r6b42_47ffdfe2-41b9-4fa3-abb9-ba4f11507038/copy/0.log" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.493150 4814 scope.go:117] "RemoveContainer" containerID="df5817f734df2913d637c144026ed471b3b10c1a6ce10c0984ac40129ec21736" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.493200 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9t2fp/must-gather-r6b42" Feb 16 11:11:32 crc kubenswrapper[4814]: I0216 11:11:32.534722 4814 scope.go:117] "RemoveContainer" containerID="5595f4e8c378484cafa42f0033eb5369b218c182d492b5113843eeb88bf3f4dd" Feb 16 11:11:33 crc kubenswrapper[4814]: I0216 11:11:33.028229 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" path="/var/lib/kubelet/pods/47ffdfe2-41b9-4fa3-abb9-ba4f11507038/volumes" Feb 16 11:11:35 crc kubenswrapper[4814]: I0216 11:11:35.994047 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:11:35 crc kubenswrapper[4814]: E0216 11:11:35.995012 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:11:47 crc kubenswrapper[4814]: I0216 11:11:47.994047 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:11:47 crc kubenswrapper[4814]: E0216 11:11:47.994976 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:12:03 crc kubenswrapper[4814]: I0216 11:12:02.999700 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:12:03 crc kubenswrapper[4814]: E0216 11:12:03.000368 4814 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:12:13 crc kubenswrapper[4814]: I0216 11:12:13.992989 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:12:13 crc kubenswrapper[4814]: E0216 11:12:13.993780 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.541215 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:25 crc kubenswrapper[4814]: E0216 11:12:25.542120 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="extract-utilities" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542137 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="extract-utilities" Feb 16 11:12:25 crc kubenswrapper[4814]: E0216 11:12:25.542155 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="extract-content" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542163 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="extract-content" Feb 16 11:12:25 crc kubenswrapper[4814]: E0216 11:12:25.542181 4814 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="copy" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542189 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="copy" Feb 16 11:12:25 crc kubenswrapper[4814]: E0216 11:12:25.542217 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="gather" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542225 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="gather" Feb 16 11:12:25 crc kubenswrapper[4814]: E0216 11:12:25.542243 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="registry-server" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542249 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="registry-server" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542433 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="22af591d-6e88-4acb-8838-490d0afe88f6" containerName="registry-server" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542443 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="gather" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.542468 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ffdfe2-41b9-4fa3-abb9-ba4f11507038" containerName="copy" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.543864 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.564457 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.676321 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.676446 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.676652 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzvtm\" (UniqueName: \"kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.778218 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.778313 4814 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.778400 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzvtm\" (UniqueName: \"kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.778730 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.778792 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.874292 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzvtm\" (UniqueName: \"kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm\") pod \"redhat-operators-c6j6j\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:25 crc kubenswrapper[4814]: I0216 11:12:25.993853 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:12:25 crc 
kubenswrapper[4814]: E0216 11:12:25.994189 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:12:26 crc kubenswrapper[4814]: I0216 11:12:26.166217 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:26 crc kubenswrapper[4814]: I0216 11:12:26.627090 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:27 crc kubenswrapper[4814]: I0216 11:12:27.005513 4814 generic.go:334] "Generic (PLEG): container finished" podID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerID="72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0" exitCode=0 Feb 16 11:12:27 crc kubenswrapper[4814]: I0216 11:12:27.005783 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerDied","Data":"72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0"} Feb 16 11:12:27 crc kubenswrapper[4814]: I0216 11:12:27.005930 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerStarted","Data":"a288183dc7487dc75bb4a584b4c3ea1a58d82e956104d9bcf37f9ed7a1cfb827"} Feb 16 11:12:27 crc kubenswrapper[4814]: I0216 11:12:27.007088 4814 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:12:29 crc kubenswrapper[4814]: I0216 11:12:29.039204 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" 
event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerStarted","Data":"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a"} Feb 16 11:12:30 crc kubenswrapper[4814]: I0216 11:12:30.050865 4814 generic.go:334] "Generic (PLEG): container finished" podID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerID="c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a" exitCode=0 Feb 16 11:12:30 crc kubenswrapper[4814]: I0216 11:12:30.050906 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerDied","Data":"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a"} Feb 16 11:12:31 crc kubenswrapper[4814]: I0216 11:12:31.060963 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerStarted","Data":"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b"} Feb 16 11:12:31 crc kubenswrapper[4814]: I0216 11:12:31.086660 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6j6j" podStartSLOduration=2.440904327 podStartE2EDuration="6.086641303s" podCreationTimestamp="2026-02-16 11:12:25 +0000 UTC" firstStartedPulling="2026-02-16 11:12:27.006883937 +0000 UTC m=+5204.700040117" lastFinishedPulling="2026-02-16 11:12:30.652620913 +0000 UTC m=+5208.345777093" observedRunningTime="2026-02-16 11:12:31.078960674 +0000 UTC m=+5208.772116864" watchObservedRunningTime="2026-02-16 11:12:31.086641303 +0000 UTC m=+5208.779797483" Feb 16 11:12:36 crc kubenswrapper[4814]: I0216 11:12:36.167443 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:36 crc kubenswrapper[4814]: I0216 11:12:36.168036 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:37 crc kubenswrapper[4814]: I0216 11:12:37.217301 4814 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6j6j" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="registry-server" probeResult="failure" output=< Feb 16 11:12:37 crc kubenswrapper[4814]: timeout: failed to connect service ":50051" within 1s Feb 16 11:12:37 crc kubenswrapper[4814]: > Feb 16 11:12:37 crc kubenswrapper[4814]: I0216 11:12:37.960093 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:12:37 crc kubenswrapper[4814]: I0216 11:12:37.960167 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:12:40 crc kubenswrapper[4814]: I0216 11:12:40.994341 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:12:40 crc kubenswrapper[4814]: E0216 11:12:40.995082 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:12:46 crc kubenswrapper[4814]: I0216 11:12:46.216780 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:46 crc kubenswrapper[4814]: I0216 11:12:46.273444 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:46 crc kubenswrapper[4814]: I0216 11:12:46.455324 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.203516 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6j6j" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="registry-server" containerID="cri-o://9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b" gracePeriod=2 Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.829390 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.939606 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzvtm\" (UniqueName: \"kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm\") pod \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.939648 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities\") pod \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\" (UID: \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.939702 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content\") pod \"7f6b5d02-41b6-464b-8606-3f7dd5af627a\" (UID: 
\"7f6b5d02-41b6-464b-8606-3f7dd5af627a\") " Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.940747 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities" (OuterVolumeSpecName: "utilities") pod "7f6b5d02-41b6-464b-8606-3f7dd5af627a" (UID: "7f6b5d02-41b6-464b-8606-3f7dd5af627a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:12:48 crc kubenswrapper[4814]: I0216 11:12:48.949737 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm" (OuterVolumeSpecName: "kube-api-access-xzvtm") pod "7f6b5d02-41b6-464b-8606-3f7dd5af627a" (UID: "7f6b5d02-41b6-464b-8606-3f7dd5af627a"). InnerVolumeSpecName "kube-api-access-xzvtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.041832 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzvtm\" (UniqueName: \"kubernetes.io/projected/7f6b5d02-41b6-464b-8606-3f7dd5af627a-kube-api-access-xzvtm\") on node \"crc\" DevicePath \"\"" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.041863 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.058782 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f6b5d02-41b6-464b-8606-3f7dd5af627a" (UID: "7f6b5d02-41b6-464b-8606-3f7dd5af627a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.144292 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f6b5d02-41b6-464b-8606-3f7dd5af627a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.214343 4814 generic.go:334] "Generic (PLEG): container finished" podID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerID="9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b" exitCode=0 Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.214403 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerDied","Data":"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b"} Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.214444 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6j6j" event={"ID":"7f6b5d02-41b6-464b-8606-3f7dd5af627a","Type":"ContainerDied","Data":"a288183dc7487dc75bb4a584b4c3ea1a58d82e956104d9bcf37f9ed7a1cfb827"} Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.214448 4814 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6j6j" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.214470 4814 scope.go:117] "RemoveContainer" containerID="9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.253552 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.255847 4814 scope.go:117] "RemoveContainer" containerID="c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.265485 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6j6j"] Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.285455 4814 scope.go:117] "RemoveContainer" containerID="72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.333091 4814 scope.go:117] "RemoveContainer" containerID="9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b" Feb 16 11:12:49 crc kubenswrapper[4814]: E0216 11:12:49.333655 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b\": container with ID starting with 9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b not found: ID does not exist" containerID="9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.333686 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b"} err="failed to get container status \"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b\": rpc error: code = NotFound desc = could not find container 
\"9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b\": container with ID starting with 9f9a1f8742f99fb86eecef6676cfc5c070dcf5b40a3e1d88d2199adf28c7a31b not found: ID does not exist" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.333708 4814 scope.go:117] "RemoveContainer" containerID="c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a" Feb 16 11:12:49 crc kubenswrapper[4814]: E0216 11:12:49.333983 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a\": container with ID starting with c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a not found: ID does not exist" containerID="c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.334014 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a"} err="failed to get container status \"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a\": rpc error: code = NotFound desc = could not find container \"c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a\": container with ID starting with c78e335b0a6201bf2ab7cd8cad8a2b984148ac473d39ace9bfe6ed4543e26f8a not found: ID does not exist" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.334033 4814 scope.go:117] "RemoveContainer" containerID="72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0" Feb 16 11:12:49 crc kubenswrapper[4814]: E0216 11:12:49.334330 4814 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0\": container with ID starting with 72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0 not found: ID does not exist" 
containerID="72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0" Feb 16 11:12:49 crc kubenswrapper[4814]: I0216 11:12:49.334375 4814 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0"} err="failed to get container status \"72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0\": rpc error: code = NotFound desc = could not find container \"72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0\": container with ID starting with 72847a1e09e11cf00f20a181e71c447cd2b045a09bc6f23b06bb1fc02c047ea0 not found: ID does not exist" Feb 16 11:12:51 crc kubenswrapper[4814]: I0216 11:12:51.004985 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" path="/var/lib/kubelet/pods/7f6b5d02-41b6-464b-8606-3f7dd5af627a/volumes" Feb 16 11:12:55 crc kubenswrapper[4814]: I0216 11:12:55.995465 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:12:55 crc kubenswrapper[4814]: E0216 11:12:55.996327 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:13:06 crc kubenswrapper[4814]: I0216 11:13:06.993088 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:13:06 crc kubenswrapper[4814]: E0216 11:13:06.993721 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:13:07 crc kubenswrapper[4814]: I0216 11:13:07.960280 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:13:07 crc kubenswrapper[4814]: I0216 11:13:07.960369 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:13:18 crc kubenswrapper[4814]: I0216 11:13:18.994142 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:13:18 crc kubenswrapper[4814]: E0216 11:13:18.994881 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:13:33 crc kubenswrapper[4814]: I0216 11:13:33.002403 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:13:33 crc kubenswrapper[4814]: E0216 11:13:33.003271 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler 
pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:13:37 crc kubenswrapper[4814]: I0216 11:13:37.959858 4814 patch_prober.go:28] interesting pod/machine-config-daemon-wt4c2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:13:37 crc kubenswrapper[4814]: I0216 11:13:37.960409 4814 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:13:37 crc kubenswrapper[4814]: I0216 11:13:37.960461 4814 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" Feb 16 11:13:37 crc kubenswrapper[4814]: I0216 11:13:37.961226 4814 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"} pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:13:37 crc kubenswrapper[4814]: I0216 11:13:37.961281 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerName="machine-config-daemon" containerID="cri-o://473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" gracePeriod=600 Feb 16 11:13:38 crc kubenswrapper[4814]: E0216 11:13:38.084685 
4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:13:38 crc kubenswrapper[4814]: I0216 11:13:38.696471 4814 generic.go:334] "Generic (PLEG): container finished" podID="22f17e0b-afd9-459b-8451-f247a3c76a74" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" exitCode=0 Feb 16 11:13:38 crc kubenswrapper[4814]: I0216 11:13:38.696521 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" event={"ID":"22f17e0b-afd9-459b-8451-f247a3c76a74","Type":"ContainerDied","Data":"473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"} Feb 16 11:13:38 crc kubenswrapper[4814]: I0216 11:13:38.696593 4814 scope.go:117] "RemoveContainer" containerID="6b750c07fe9ca28e7f9f87514229ecb4c19ab370da91481070c935919bef9205" Feb 16 11:13:38 crc kubenswrapper[4814]: I0216 11:13:38.698251 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:13:38 crc kubenswrapper[4814]: E0216 11:13:38.701120 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:13:45 crc kubenswrapper[4814]: I0216 11:13:45.994322 4814 scope.go:117] "RemoveContainer" 
containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:13:45 crc kubenswrapper[4814]: E0216 11:13:45.995250 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:13:53 crc kubenswrapper[4814]: I0216 11:13:52.999931 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:13:53 crc kubenswrapper[4814]: E0216 11:13:53.000784 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:13:59 crc kubenswrapper[4814]: I0216 11:13:59.993998 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:13:59 crc kubenswrapper[4814]: E0216 11:13:59.994948 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:14:04 crc kubenswrapper[4814]: I0216 11:14:04.995058 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:14:04 crc kubenswrapper[4814]: E0216 
11:14:04.996270 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:14:10 crc kubenswrapper[4814]: I0216 11:14:10.993633 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:14:10 crc kubenswrapper[4814]: E0216 11:14:10.994348 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:14:18 crc kubenswrapper[4814]: I0216 11:14:18.993791 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:14:18 crc kubenswrapper[4814]: E0216 11:14:18.994713 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:14:23 crc kubenswrapper[4814]: I0216 11:14:23.004786 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:14:23 crc kubenswrapper[4814]: E0216 11:14:23.008130 4814 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:14:30 crc kubenswrapper[4814]: I0216 11:14:30.994421 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:14:30 crc kubenswrapper[4814]: E0216 11:14:30.995696 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:14:37 crc kubenswrapper[4814]: I0216 11:14:37.995682 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:14:37 crc kubenswrapper[4814]: E0216 11:14:37.997965 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:14:41 crc kubenswrapper[4814]: I0216 11:14:41.994227 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:14:41 crc kubenswrapper[4814]: E0216 11:14:41.995318 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:14:50 crc kubenswrapper[4814]: I0216 11:14:50.994226 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a" Feb 16 11:14:50 crc kubenswrapper[4814]: E0216 11:14:50.995486 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655" Feb 16 11:14:54 crc kubenswrapper[4814]: I0216 11:14:54.994735 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a" Feb 16 11:14:54 crc kubenswrapper[4814]: E0216 11:14:54.995434 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.151697 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"] Feb 16 11:15:00 crc kubenswrapper[4814]: E0216 11:15:00.153451 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="extract-content" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.153481 4814 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="extract-content" Feb 16 11:15:00 crc kubenswrapper[4814]: E0216 11:15:00.153521 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="extract-utilities" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.153565 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="extract-utilities" Feb 16 11:15:00 crc kubenswrapper[4814]: E0216 11:15:00.153590 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="registry-server" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.153601 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="registry-server" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.154801 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6b5d02-41b6-464b-8606-3f7dd5af627a" containerName="registry-server" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.156040 4814 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.158722 4814 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.158835 4814 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.165866 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"] Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.295299 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.296026 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.296453 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7f8c\" (UniqueName: \"kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.398312 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7f8c\" (UniqueName: \"kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.398392 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.398452 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.399430 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.407318 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.417432 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7f8c\" (UniqueName: \"kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c\") pod \"collect-profiles-29520675-d8hzr\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.482853 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:00 crc kubenswrapper[4814]: I0216 11:15:00.940050 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"]
Feb 16 11:15:01 crc kubenswrapper[4814]: I0216 11:15:01.499728 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" event={"ID":"75931ea5-97c1-49b1-99cd-52ae3e836d7c","Type":"ContainerStarted","Data":"fb4667bf3f343a1d07458d527c42bda1739d98abe7dc1b2b147de36559f6a678"}
Feb 16 11:15:01 crc kubenswrapper[4814]: I0216 11:15:01.500503 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" event={"ID":"75931ea5-97c1-49b1-99cd-52ae3e836d7c","Type":"ContainerStarted","Data":"ddfb735e6760c4bdb33fe99a6de2f9217df3781882ba6a9d826ea831ce7237cb"}
Feb 16 11:15:01 crc kubenswrapper[4814]: I0216 11:15:01.993597 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a"
Feb 16 11:15:01 crc kubenswrapper[4814]: E0216 11:15:01.994141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:15:02 crc kubenswrapper[4814]: I0216 11:15:02.509653 4814 generic.go:334] "Generic (PLEG): container finished" podID="75931ea5-97c1-49b1-99cd-52ae3e836d7c" containerID="fb4667bf3f343a1d07458d527c42bda1739d98abe7dc1b2b147de36559f6a678" exitCode=0
Feb 16 11:15:02 crc kubenswrapper[4814]: I0216 11:15:02.509702 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" event={"ID":"75931ea5-97c1-49b1-99cd-52ae3e836d7c","Type":"ContainerDied","Data":"fb4667bf3f343a1d07458d527c42bda1739d98abe7dc1b2b147de36559f6a678"}
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.828479 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.979570 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume\") pod \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") "
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.980078 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7f8c\" (UniqueName: \"kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c\") pod \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") "
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.980307 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume\") pod \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\" (UID: \"75931ea5-97c1-49b1-99cd-52ae3e836d7c\") "
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.980978 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume" (OuterVolumeSpecName: "config-volume") pod "75931ea5-97c1-49b1-99cd-52ae3e836d7c" (UID: "75931ea5-97c1-49b1-99cd-52ae3e836d7c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.985694 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75931ea5-97c1-49b1-99cd-52ae3e836d7c" (UID: "75931ea5-97c1-49b1-99cd-52ae3e836d7c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:15:03 crc kubenswrapper[4814]: I0216 11:15:03.985731 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c" (OuterVolumeSpecName: "kube-api-access-z7f8c") pod "75931ea5-97c1-49b1-99cd-52ae3e836d7c" (UID: "75931ea5-97c1-49b1-99cd-52ae3e836d7c"). InnerVolumeSpecName "kube-api-access-z7f8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.082921 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7f8c\" (UniqueName: \"kubernetes.io/projected/75931ea5-97c1-49b1-99cd-52ae3e836d7c-kube-api-access-z7f8c\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.082961 4814 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75931ea5-97c1-49b1-99cd-52ae3e836d7c-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.082975 4814 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75931ea5-97c1-49b1-99cd-52ae3e836d7c-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.531797 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr" event={"ID":"75931ea5-97c1-49b1-99cd-52ae3e836d7c","Type":"ContainerDied","Data":"ddfb735e6760c4bdb33fe99a6de2f9217df3781882ba6a9d826ea831ce7237cb"}
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.531836 4814 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddfb735e6760c4bdb33fe99a6de2f9217df3781882ba6a9d826ea831ce7237cb"
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.531905 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-d8hzr"
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.590451 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"]
Feb 16 11:15:04 crc kubenswrapper[4814]: I0216 11:15:04.600880 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520630-tqnxv"]
Feb 16 11:15:05 crc kubenswrapper[4814]: I0216 11:15:05.009332 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1e3427-8e98-4cc8-ad68-5af087a8443f" path="/var/lib/kubelet/pods/da1e3427-8e98-4cc8-ad68-5af087a8443f/volumes"
Feb 16 11:15:07 crc kubenswrapper[4814]: I0216 11:15:07.994692 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"
Feb 16 11:15:07 crc kubenswrapper[4814]: E0216 11:15:07.995141 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 11:15:15 crc kubenswrapper[4814]: I0216 11:15:15.003174 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a"
Feb 16 11:15:15 crc kubenswrapper[4814]: E0216 11:15:15.005702 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.815399 4814 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:19 crc kubenswrapper[4814]: E0216 11:15:19.816437 4814 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75931ea5-97c1-49b1-99cd-52ae3e836d7c" containerName="collect-profiles"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.816449 4814 state_mem.go:107] "Deleted CPUSet assignment" podUID="75931ea5-97c1-49b1-99cd-52ae3e836d7c" containerName="collect-profiles"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.818036 4814 memory_manager.go:354] "RemoveStaleState removing state" podUID="75931ea5-97c1-49b1-99cd-52ae3e836d7c" containerName="collect-profiles"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.819836 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.835995 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.906397 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.906989 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzjtg\" (UniqueName: \"kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:19 crc kubenswrapper[4814]: I0216 11:15:19.907551 4814 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.009005 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.009370 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzjtg\" (UniqueName: \"kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.009515 4814 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.010097 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.010374 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.047347 4814 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzjtg\" (UniqueName: \"kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg\") pod \"certified-operators-hqrjj\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") " pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.148890 4814 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.665856 4814 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:20 crc kubenswrapper[4814]: I0216 11:15:20.993993 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"
Feb 16 11:15:20 crc kubenswrapper[4814]: E0216 11:15:20.994342 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"
Feb 16 11:15:21 crc kubenswrapper[4814]: W0216 11:15:21.080063 4814 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48808842_6c5b_422a_a60e_7a5d7d036875.slice/crio-6bd1e6c79aeeb6141e408d612cbae51ab92e418b0309ab79939983fdfedee654 WatchSource:0}: Error finding container 6bd1e6c79aeeb6141e408d612cbae51ab92e418b0309ab79939983fdfedee654: Status 404 returned error can't find the container with id 6bd1e6c79aeeb6141e408d612cbae51ab92e418b0309ab79939983fdfedee654
Feb 16 11:15:21 crc kubenswrapper[4814]: I0216 11:15:21.693736 4814 generic.go:334] "Generic (PLEG): container finished" podID="48808842-6c5b-422a-a60e-7a5d7d036875" containerID="cc64b1c98d7aba2c8dbfbe5433b31573eda208c7350a7a4d405b84ba2d7caec1" exitCode=0
Feb 16 11:15:21 crc kubenswrapper[4814]: I0216 11:15:21.694169 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerDied","Data":"cc64b1c98d7aba2c8dbfbe5433b31573eda208c7350a7a4d405b84ba2d7caec1"}
Feb 16 11:15:21 crc kubenswrapper[4814]: I0216 11:15:21.694203 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerStarted","Data":"6bd1e6c79aeeb6141e408d612cbae51ab92e418b0309ab79939983fdfedee654"}
Feb 16 11:15:22 crc kubenswrapper[4814]: I0216 11:15:22.703183 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerStarted","Data":"512e2bacfb2aa885b85bb324eba476563f4f4a1facbc0b588e8f42645625e14f"}
Feb 16 11:15:24 crc kubenswrapper[4814]: I0216 11:15:24.722358 4814 generic.go:334] "Generic (PLEG): container finished" podID="48808842-6c5b-422a-a60e-7a5d7d036875" containerID="512e2bacfb2aa885b85bb324eba476563f4f4a1facbc0b588e8f42645625e14f" exitCode=0
Feb 16 11:15:24 crc kubenswrapper[4814]: I0216 11:15:24.722426 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerDied","Data":"512e2bacfb2aa885b85bb324eba476563f4f4a1facbc0b588e8f42645625e14f"}
Feb 16 11:15:25 crc kubenswrapper[4814]: I0216 11:15:25.736292 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerStarted","Data":"d730c2d002f1b882a943c53f86906cb29bdc4165fe7e831ca687d545726ab34b"}
Feb 16 11:15:25 crc kubenswrapper[4814]: I0216 11:15:25.760381 4814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqrjj" podStartSLOduration=3.029029541 podStartE2EDuration="6.760359093s" podCreationTimestamp="2026-02-16 11:15:19 +0000 UTC" firstStartedPulling="2026-02-16 11:15:21.696826059 +0000 UTC m=+5379.389982249" lastFinishedPulling="2026-02-16 11:15:25.428155621 +0000 UTC m=+5383.121311801" observedRunningTime="2026-02-16 11:15:25.752079178 +0000 UTC m=+5383.445235378" watchObservedRunningTime="2026-02-16 11:15:25.760359093 +0000 UTC m=+5383.453515273"
Feb 16 11:15:28 crc kubenswrapper[4814]: I0216 11:15:28.993461 4814 scope.go:117] "RemoveContainer" containerID="275f493820fc0309144e9f8c53b90901905f4fce43e07f40afe458657c92ef7a"
Feb 16 11:15:28 crc kubenswrapper[4814]: E0216 11:15:28.994659 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=cinder-scheduler pod=cinder-scheduler-0_openstack(c4396e79-fda2-435d-ae1f-f92a838ea655)\"" pod="openstack/cinder-scheduler-0" podUID="c4396e79-fda2-435d-ae1f-f92a838ea655"
Feb 16 11:15:30 crc kubenswrapper[4814]: I0216 11:15:30.149933 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:30 crc kubenswrapper[4814]: I0216 11:15:30.150332 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:31 crc kubenswrapper[4814]: I0216 11:15:31.024789 4814 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:31 crc kubenswrapper[4814]: I0216 11:15:31.074773 4814 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:31 crc kubenswrapper[4814]: I0216 11:15:31.271505 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:32 crc kubenswrapper[4814]: I0216 11:15:32.794517 4814 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hqrjj" podUID="48808842-6c5b-422a-a60e-7a5d7d036875" containerName="registry-server" containerID="cri-o://d730c2d002f1b882a943c53f86906cb29bdc4165fe7e831ca687d545726ab34b" gracePeriod=2
Feb 16 11:15:33 crc kubenswrapper[4814]: I0216 11:15:33.814807 4814 generic.go:334] "Generic (PLEG): container finished" podID="48808842-6c5b-422a-a60e-7a5d7d036875" containerID="d730c2d002f1b882a943c53f86906cb29bdc4165fe7e831ca687d545726ab34b" exitCode=0
Feb 16 11:15:33 crc kubenswrapper[4814]: I0216 11:15:33.814987 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerDied","Data":"d730c2d002f1b882a943c53f86906cb29bdc4165fe7e831ca687d545726ab34b"}
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.066111 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.227005 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzjtg\" (UniqueName: \"kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg\") pod \"48808842-6c5b-422a-a60e-7a5d7d036875\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") "
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.228498 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities\") pod \"48808842-6c5b-422a-a60e-7a5d7d036875\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") "
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.228813 4814 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content\") pod \"48808842-6c5b-422a-a60e-7a5d7d036875\" (UID: \"48808842-6c5b-422a-a60e-7a5d7d036875\") "
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.229373 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities" (OuterVolumeSpecName: "utilities") pod "48808842-6c5b-422a-a60e-7a5d7d036875" (UID: "48808842-6c5b-422a-a60e-7a5d7d036875"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.230009 4814 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.237327 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg" (OuterVolumeSpecName: "kube-api-access-mzjtg") pod "48808842-6c5b-422a-a60e-7a5d7d036875" (UID: "48808842-6c5b-422a-a60e-7a5d7d036875"). InnerVolumeSpecName "kube-api-access-mzjtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.284796 4814 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48808842-6c5b-422a-a60e-7a5d7d036875" (UID: "48808842-6c5b-422a-a60e-7a5d7d036875"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.333013 4814 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808842-6c5b-422a-a60e-7a5d7d036875-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.333046 4814 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzjtg\" (UniqueName: \"kubernetes.io/projected/48808842-6c5b-422a-a60e-7a5d7d036875-kube-api-access-mzjtg\") on node \"crc\" DevicePath \"\""
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.826663 4814 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqrjj" event={"ID":"48808842-6c5b-422a-a60e-7a5d7d036875","Type":"ContainerDied","Data":"6bd1e6c79aeeb6141e408d612cbae51ab92e418b0309ab79939983fdfedee654"}
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.826732 4814 scope.go:117] "RemoveContainer" containerID="d730c2d002f1b882a943c53f86906cb29bdc4165fe7e831ca687d545726ab34b"
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.826748 4814 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqrjj"
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.856233 4814 scope.go:117] "RemoveContainer" containerID="512e2bacfb2aa885b85bb324eba476563f4f4a1facbc0b588e8f42645625e14f"
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.888650 4814 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.901203 4814 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hqrjj"]
Feb 16 11:15:34 crc kubenswrapper[4814]: I0216 11:15:34.902057 4814 scope.go:117] "RemoveContainer" containerID="cc64b1c98d7aba2c8dbfbe5433b31573eda208c7350a7a4d405b84ba2d7caec1"
Feb 16 11:15:35 crc kubenswrapper[4814]: I0216 11:15:35.009073 4814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48808842-6c5b-422a-a60e-7a5d7d036875" path="/var/lib/kubelet/pods/48808842-6c5b-422a-a60e-7a5d7d036875/volumes"
Feb 16 11:15:35 crc kubenswrapper[4814]: I0216 11:15:35.994001 4814 scope.go:117] "RemoveContainer" containerID="473275adfffeb058c2c50432f5ba142e04aa9cde50ca632fac03c4990d57269a"
Feb 16 11:15:35 crc kubenswrapper[4814]: E0216 11:15:35.994587 4814 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wt4c2_openshift-machine-config-operator(22f17e0b-afd9-459b-8451-f247a3c76a74)\"" pod="openshift-machine-config-operator/machine-config-daemon-wt4c2" podUID="22f17e0b-afd9-459b-8451-f247a3c76a74"